Providing telemetry data with OPC UA on Eclipse Kura

The upcoming version 2.1.0 of Eclipse Kura™ will feature an enhanced version of the Apache Camel™ integration which was introduced in Kura 2.0.0. There are various new ways to run Camel routes, configured either as XML routes or using the Java DSL. Apache Camel can act as a Kura application but, new in this release, there is also a way to simply configure Camel as a “cloud service”. In past releases of Kura, applications could only push data to one cloud target. The new 2.1.0 release adds the ability to define multiple cloud targets, and one of those targets can be an Apache Camel router instance.

With Camel there are different ways of achieving this goal, but in this post I would like to focus on the “out of the box” way: simply configuring (not developing) a set of Camel routes which act as a cloud service. Traditional cloud service instances in Kura can only deliver data to, or subscribe to, a single cloud infrastructure. Using Apache Camel it is possible to connect to a whole set of technologies at the same time.

The setup

The setup will be a Kura instance running a pre-release version of Kura 2.1.0. The final version should be out in a few weeks and won’t differ much from the current state. We will configure a new cloud service instance which takes Kura application payload data and provides it via OPC UA, using the Camel OPC UA adapter. As payload provider (aka Kura application) we will use the “Example publisher” from my Kura addons project.

Open up the Kura Web UI, navigate to “Packages” and select “Install/Update”. Switch to “URL” and provide the following URL:

https://dentrassi.de/download/kura/de.dentrassi.kura.addons.example.publisher_0.1.0-SNAPSHOT.dp

Note: As an alternative you can also download the “dp” package with your desktop browser and deploy the file using the “file” upload instead of “URL”.

Adding packages to Kura

The installation may take a moment and it may be necessary to press the “Refresh” button in order to see the installed package. After the package has been installed you should see the service “Camel example publisher” on the left side.

Now we need to install the “Milo component for Camel”. Press “Install/Update” again and enter the following URL:

https://dentrassi.de/download/kura/de.dentrassi.kura.addons.milo_0.1.0-pre1.dp

This installation will take a lot longer and you will need to check again by pressing the “Refresh” button in the Web UI.

We will also need to allow TCP access to port 12685. If you have the network managed version of Kura installed, switch to the UI section “Firewall”, open a new port “12685” allowing access from “0.0.0.0/0” (Permitted Network) and press “Apply”.

A new cloud service

By default the “example publisher” publishes to the default Kura cloud service instance. We will now create a new cloud service instance and redirect the data to OPC UA, making it available as an OPC UA server. OPC UA distinguishes between client and server, and while the Camel component provides both, in this case we want others to consume our data, so offering it as an OPC UA server is the way to go.

Navigate to “Cloud Services” and press the “New” button. From the list of possible providers select org.eclipse.kura.camel.cloud.factory.CamelFactory, enter a cloud service PID (e.g. camel-opcua) and press “Create”.

After the instance has been created select it and configure it with the following options:

Router XML:

<routes xmlns="http://camel.apache.org/schema/spring">
    <route id="opc-ua-example">
        <!-- consume payloads published by the Kura application -->
        <from uri="vm:camel:example"/>
        <!-- split the payload into its individual metrics -->
        <split>
            <simple>${body.metrics().entrySet()}</simple>
            <!-- remember the metric name in a header ... -->
            <setHeader headerName="item">
                <simple>${body.key()}</simple>
            </setHeader>
            <!-- ... and use the metric value as the new body -->
            <setBody>
                <simple>${body.value()}</simple>
            </setBody>
            <!-- write the value to the OPC UA server item named after the metric -->
            <toD uri="my-milo:${header.item}"/>
        </split>
    </route>
</routes>

Initialization Code:

// create an OPC UA server component instance (Eclipse Milo) ...
var milo = new org.apache.camel.component.milo.server.MiloServerComponent();
// ... allow anonymous clients to connect ...
milo.setEnableAnonymousAuthentication(true);
// ... and register it with the Camel context under the name "my-milo"
camelContext.addComponent("my-milo", milo);

Screenshot of the OPC UA cloud service configuration

Assigning the cloud service

Now we need to configure the example publisher to actually use our new cloud service instance. Select “Camel example publisher” from the left navigation bar and enter “camel-opcua” (or whatever PID you used before) as “Cloud Service PID”. Apply the changes.

Testing the result

First of all, if you log into your device using SSH, you should be able to see that port 12685 is open:

root@raspberrypi:/home/pi# ss -nlt | grep 12685
LISTEN     0      128                      :::12685                   :::*     

Now you can connect to your device with any OPC UA explorer using the URI: opc.tcp://<my-ip>:12685

I am using Android and the “ProSYS OPC UA Client”.

Summing it up

This tutorial uses a SNAPSHOT version of Eclipse Milo, simply because no version of Milo has been released just yet. This should change in the coming weeks and my plan is to update the blog post once a release is available. However the functionality of Milo will not change, and when using the Camel component most internals of Milo are hidden anyway.

Apache Camel on Eclipse Kura can provide a completely new way of communication. This example was a rather simple one; Camel can do a lot more when it comes to processing data, and not all real-life applications may be as easy as this. But of course the intention of this blog post was to give a quick introduction into the combination of Camel and Kura. Using the Camel Java DSL or the Kura Camel programmatic API can give greater flexibility. And yet, the example shows that even with a few lines of Camel XML, amazing things can be achieved.
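
For illustration, a rough Java DSL sketch of the XML route above could look like this (to be placed inside a RouteBuilder’s configure() method; the “my-milo” component registration from the initialization code is assumed):

// consume Kura payloads, split them into metrics and write each one to an OPC UA item
from("vm:camel:example")
    .split(simple("${body.metrics().entrySet()}"))
        .setHeader("item", simple("${body.key()}"))
        .setBody(simple("${body.value()}"))
        .toD("my-milo:${header.item}")
    .end();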

Dropping Apache File Install into Eclipse Kura

Sometimes the simple things may be the most valuable. Testing with Eclipse Kura™ on a Raspberry Pi (or any other Eclipse Kura device) can be a bit tricky. Of course you can use the Eclipse UI in combination with mToolkit. But if you want to edit, compile and deploy from a local desktop machine to a Kura device, then you need to click through the Web UI to upload your application. And for this to work you also need to assemble a DP (deployment package) first.

But what if you could simply drop an OSGi bundle into a directory and let it get picked up by Kura automatically? Thanks to Apache File Install, there already is such a solution. File Install scans a folder and loads every OSGi bundle located in this folder. If a bundle is started and it gets overwritten in the file system, then File Install will reload and restart the bundle.

So deploying and re-deploying to a Kura device is as easy as copying a file to your target with SCP (or the remote copy tool of your choice).

And installing Apache File Install into Eclipse Kura just got a bit simpler.

Using Maven Central

Simply navigate to the Packages section of the Eclipse Kura Web UI. Press the “Install” button and choose “URL” and enter the following URL:

https://repo1.maven.org/maven2/de/dentrassi/kura/addons/de.dentrassi.kura.addons.utils.fileinstall/0.1.0/de.dentrassi.kura.addons.utils.fileinstall-0.1.0.dp

After confirming with the “Submit” button it will take a moment and then File Install will be installed into your Kura installation. Sometimes it takes a bit longer than Kura expects and you need to reload the Web UI (Ctrl-R) until Kura has performed the installation.

Using the Eclipse Marketplace

Currently the Eclipse Marketplace is focused on hosting plugins for the Eclipse IDE, but this should change rather soon. Still, it is already possible to drag and drop Apache File Install into Eclipse Kura using the following install button:

Drag to your running Eclipse workspace to install Apache File Install for Eclipse Kura

Dragging this button into the Kura Web UI will bring up a confirmation dialog if you want to install the addon. After confirming it will go and fetch the DP and install Apache File Install into the running Kura instance.

Deploying bundles

Now it is time to deploy some bundles. By default the directory where Apache File Install looks for bundles is /opt/eclipse/kura/load. At first this directory will not exist, so it has to be created. Next we simply fetch an example bundle using wget:

cd /opt/eclipse/kura
mkdir load
cd load
wget "https://dentrassi.de/download/kura/org.eclipse.kura.example.camel.publisher-1.0.0.jar"

This is an example publisher bundle from the upcoming Kura 2.1.0 release. So if you are still using Kura 2.0, then you can either try a different OSGi bundle, or maybe give Kura 2.1 a try 😉

Due to a regression in Kura (eclipse/kura#123) there is currently no way to manually start OSGi bundles. So you need to stop Kura and start the local command console (/opt/eclipse/kura/bin/start_kura.sh). On the OSGi shell you can then:

osgi> ss example
"Framework is launched."


id	State       Bundle
116	RESOLVED    org.eclipse.kura.example.camel.publisher_1.0.0
osgi> start 116
osgi> ss example
"Framework is launched."


id	State       Bundle
116	ACTIVE      org.eclipse.kura.example.camel.publisher_1.0.0
osgi>

Once this initial activation has been performed, File Install and OSGi will keep the bundle active. You can re-deploy this OSGi bundle by simply copying a new version over the old one. File Install will detect the change and refresh the bundle.

There is more…

Apache File Install can also update OSGi configurations and it can be configured using a set of system properties. For the full set of options check out the Apache File Install documentation.
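
For example, the watched directory and the poll interval can be configured with system properties like the following (a sketch; the values shown are just illustrations):

# directory watched by File Install (here: the Kura load folder used above)
felix.fileinstall.dir=/opt/eclipse/kura/load
# poll interval in milliseconds
felix.fileinstall.poll=5000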

Eclipse Kura on Apache Karaf

It took quite a while and triggered a bunch of pull request upstream, but finally I do have Eclipse Kura™ running with Apache Karaf™.

Now of course I immediately got the question: why is that good? After all, the new setup still uses Equinox underneath Karaf. Now what? … aren’t Equinox and Karaf two different things? Read on …

What is Karaf?

Ok, back to the basics. Eclipse Equinox™ is an implementation of OSGi, as is Apache Felix™. OSGi is a highly modular Java framework for creating modular applications. And it eats its own dogfood: there is a thin layer of OSGi (the framework) and a bunch of “add-on” functionality (services) which is part of OSGi, but at the same time runs on top of that thin OSGi layer. Equinox and Felix both provide this thin OSGi layer as well as a set of “standard” OSGi services. But it is possible to interchange service implementations between Equinox and Felix, since both use OSGi. For example in Package Drone I started out with Equinox as OSGi container, but in a later release switched to the Apache Felix ConfigurationAdmin since it didn’t have the bugs of the Equinox implementation. So the final result is a plain Equinox installation with some parts from Apache Felix.

Apache Karaf can be seen as a distribution of an OSGi framework and services. Like a Linux distribution starts with a kernel and adds a set of packages which work well together, Karaf is a combination of an OSGi framework (either Felix or Equinox, or others) and a set of add-on functionality which is tested against each other.

And those add-on services are the salt in the OSGi soup. You get a command line shell with completion support, JMX support, OSGi Blueprint, web server and websocket support, JAAS, SSH, JPA and way more. Installing bundles, features or complete archives directly from local sources, HTTP or Maven Central. It wraps some non-OSGi dependencies automatically into OSGi bundles. And many 3rd party components already take advantage of this. For example, installing Apache Camel™ into a Karaf container is done with two lines on the Karaf shell (see below).
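
Something like this (the exact version number is just an illustration):

karaf@root()> feature:repo-add camel 2.17.3
karaf@root()> feature:install camel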

What Kura currently does

Eclipse Kura currently assembles its own distribution based on pure and simple Equinox components using a big Ant file. While this has an advantage when it comes to distribution size, and the fact that you can control each and every aspect of that distribution, it also comes at a price: you need to do exactly that. The Ant file needs to create each startup configuration file, knowing about each and every bundle version.

Karaf on the other hand already has ready-to-run distributions in various flavors, and it has some tooling to create new ones.

What is different now

In order to bring Kura to Karaf a few changes were necessary in Kura. In addition to that, a new set of Maven projects assembles a target distribution. Right now it is not possible to simply drop Kura into Karaf as a Karaf feature. A special Karaf assembly has to be prepared, which can then be uploaded and installed at a specific location. This is mostly necessary due to the fact that Kura expects files at a specific location in the file system. Karaf on the other hand can simply be downloaded, unzipped and started at any location.

In addition to that, a few bugs in Kura had to be fixed. Kura made a few assumptions about how Equinox and its service implementations handle things, which may no longer be true in other OSGi environments. These fixes will probably make it into Kura 2.1.

Can I test it today?

If you just want to start up a stable build (not a release) you can easily do this with Docker:

sudo docker run -ti ctron/kura:karaf-stable    # stable milestone
sudo docker run -ti ctron/kura:karaf-develop   # latest develop build

This will drop you into a Karaf shell of Kura running inside Docker. You can play around with the console, install some additional functionality or try out an example like OPC UA with Apache Camel.

You can also check out the branch feature/karaf in my fork of Kura which contains all the upstream patches and the Karaf distribution projects in one branch. This branch is also used to generate the docker images.

What is next?

Clean up the target assembly: right now assembling the target platform for Karaf is not as simple as I had hoped. This is mostly due to the complexity of such a setup (mixing different variants of Linux, processors and devices) and the way Maven works. With a bit of work this can be made even simpler by adding a specific Maven plugin and making one or two fixes in the Karaf Maven plugin.

Enable network management: right now I disabled network management for the two bare-metal Linux targets, simply because I had a few issues getting this running on RHEL 7 and Fedora. Things like ethernet devices which are no longer named “eth0” and the like.

Bring Kura to Maven Central/More Kura bundles: These two things go hand in hand. Of course I could simply create more Karaf features, packaging more Kura bundles. But it would be much better to have Kura artifacts available on Maven Central, being able to consume them directly with Karaf. That way it would be possible to either just drop in new functionality or to create a custom Kura+Karaf distribution based on existing modules.

Will it become the default for Kura?

Hard to say. Hopefully in the future. My plan is to bring it into Kura in the 2.2 release cycle. But this also means that quite a set of dependencies has to go through the Eclipse CQ process if we want to provide Karaf-ready distributions for download. A first step could be to just provide the recipes for creating your own Karaf distribution of Kura.

So the next step is to bring support for Karaf into Kura upstream.

Collecting data to OpenTSDB with Apache Camel

OpenTSDB is an open source, scalable, <buzzword>big data</buzzword> solution for storing time series data. Well, that is what the name of the project actually says 😉 In combination with Grafana you can pretty easily build a few nice dashboards for visualizing that data. The only question is, of course, how do you get your data into that system.

My intention was to provide a simple way to stream metrics into OpenTSDB using Apache Camel. A quick search did not bring up any existing solutions for pushing data from Apache Camel to OpenTSDB, so I decided to write a small Camel component which can pick up data and send it to OpenTSDB using the HTTP API. Of course having a plain OpenTSDB HTTP collector API for that would be fine as well. So I split up the component into three different modules: a generic OpenTSDB collector API, an HTTP implementation of that, and finally the Apache Camel component.

All components are already available in Maven Central, and although they have the version number 0.0.3, they are working quite well 😉

<dependency>
    <groupId>de.dentrassi.iot</groupId>
    <artifactId>de.dentrassi.iot.opentsdb.collector</artifactId>
    <version>0.0.3</version>
</dependency>
<dependency>
    <groupId>de.dentrassi.iot</groupId>
    <artifactId>de.dentrassi.iot.opentsdb.collector.http</artifactId>
    <version>0.0.3</version>
</dependency>
<dependency>
    <groupId>de.dentrassi.iot</groupId>
    <artifactId>de.dentrassi.iot.opentsdb.collector.camel</artifactId>
    <version>0.0.3</version>
</dependency>

Dropping those dependencies into your classpath, or into your OSGi container ;-), you can use the following code to easily push data coming from MQTT directly into OpenTSDB:

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

CamelContext context = new DefaultCamelContext();

// add routes

context.addRoutes(new RouteBuilder() {

    @Override
    public void configure() throws Exception {
        from("paho:sensors/test2/temperature?brokerUrl=tcp://iot.eclipse.org")
            .log("${body}")
            .convertBodyTo(String.class).convertBodyTo(Float.class)
            .to("open-tsdb:http://localhost:4242#test2/value=temp");

        from("paho:tele/devices/TEMP?brokerUrl=tcp://iot.eclipse.org")
            .log("${body}")
            .convertBodyTo(String.class).convertBodyTo(Float.class)
            .to("open-tsdb:http://localhost:4242#test3/value=temp");
    }
});

// start the context
context.start();

You can directly push Floats or Integers into OpenTSDB. The example above shows a “to” endpoint which directly addresses a metric, using #test3/value=temp. If you need more tags, those can be added using #test3/value=temp/foo=bar.
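
For illustration, a small Java DSL sketch (reusing the MQTT source from the example above) which adds such an extra tag:

// push the numeric MQTT payload to OpenTSDB, adding the extra tag foo=bar
from("paho:tele/devices/TEMP?brokerUrl=tcp://iot.eclipse.org")
    .convertBodyTo(String.class).convertBodyTo(Float.class)
    .to("open-tsdb:http://localhost:4242#test3/value=temp/foo=bar");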

But it is also possible to have a generic endpoint and provide the metric information in the actual payload. In this case you have to use the type de.dentrassi.iot.opentsdb.collector.Data and fill in the necessary information. It is also possible to publish an array of Data[].

Camel, Kura and OSGi, struggling with ‘sun.misc.Unsafe’

So here comes a puzzle for you … You have Apache Camel (2.17), which internally uses com.googlecode.concurrentlinkedhashmap, which uses sun.misc.Unsafe. Now you can argue a lot about whether this is necessary or not. It just is that way. So starting up Apache Camel in an OSGi container which does strict processing of classes will run into a “java.lang.NoClassDefFoundError” due to “sun/misc/Unsafe”.

The cause

The cause is rather simple. Apache Camel makes use of sun.misc and so it should declare that in the OSGi manifest. OSGi R6 (and versions before that as well) defines in section “3.9.4” of the core specification that java.* is forwarded to the parent class loader, but the rest is not. So sun.misc will not go to the parent class loader (which finally is the JVM) by default.

Solutions

As always, there are a few. There may be a few more possible than I describe here, but I don’t want to list any which require changing Apache Camel itself.

Fragments

Two Fragments

OSGi fragments are a way to enhance an already existing OSGi bundle; they kind of merge into their host bundle. So it is possible to create a fragment for Apache Camel which does Import-Package: sun.misc. This should quickly resolve the issue, as long as the fragment is installed into your OSGi container at the same time Apache Camel is, so that it is available when Apache Camel is started. The host bundle has to be org.apache.camel.camel-core, since this is the bundle requiring sun.misc.Unsafe.
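
A minimal manifest for such a fragment might look like this (the symbolic name is just an example):

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: my.sun.misc.importer
Bundle-Version: 1.0.0
Fragment-Host: org.apache.camel.camel-core
Import-Package: sun.misc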

Of course this brings up the next issue: there is nobody who exports sun.misc. But again, there is a way to fix this.

The actual provider of sun.misc is the JVM. However, the JVM does not know about OSGi. But the OSGi container itself, the framework, can act as a proxy. So if the framework bundle (aka bundle zero) exported sun.misc, it would be able to actually resolve the class by using the JVM boot classpath. The solution therefore is another fragment, which performs an Export-Package: sun.misc. That will bring both bundles, with their fragments, together, correctly wiring up sun.misc.

But as we have seen before, the fragment requires a “host bundle”, and this would be different when e.g. using Apache Felix instead of Eclipse Equinox.

Again, there is a solution. The system bundle is also known as system.bundle. So the fragment can specify system.bundle with the attribute extension:=framework as its fragment host:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: my.sun.misc.provider
Bundle-Version: 1.0.0
Fragment-Host: system.bundle; extension:=framework
Export-Package: sun.misc

Of course you can also export other JVM internal packages this way.

There are only two things to keep in mind. First of all (and this is true for all other solutions as well), if the JVM does not provide sun.misc then this won’t work, since the class cannot be found. Second (and this is specific to this solution), if you start “camel-core” before those two fragments are installed, then you need to “refresh” the Apache Camel core bundle in order for the OSGi framework to re-wire your imports/exports.

There are also some pre-made extension bundles for this setup. Just search Maven Central.

Equinox and Felix

Some setups of Felix and Equinox provide an “out of the box” workaround. Equinox, for example, automatically forwards all failed class lookups to the boot class loader as a last resort, in the case the framework is started using the org.eclipse.equinox.launcher_*.jar launcher instead of the org.eclipse.osgi_*.jar launcher.

Bootclasspath delegation for Equinox

Eclipse Equinox also allows setting a few system properties in order to fall back to the boot classpath and delegate the lookup of “sun.misc” to the JVM:

Also see: https://wiki.eclipse.org/Equinox_Boot_Delegation

osgi.compatibility.bootdelegation=true
This will fall back to the boot class loader, like when using the “org.eclipse.equinox.launcher” launcher.

Bootclasspath delegation for all

The OSGi core specification also allows configuring direct delegation of lookups to the boot class loader (section 3.9.3 of the OSGi core specification):

org.osgi.framework.bootdelegation=sun.misc.*
This will forward requests for “sun.misc.*” directly to the boot class loader.
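
Since this is a framework launching property, it can for example simply be passed as a JVM system property when starting the framework (a sketch; with Equinox the property could also go into the config.ini file instead):

java -Dorg.osgi.framework.bootdelegation='sun.misc.*' -jar org.eclipse.osgi_*.jar -console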

Conclusion

Now people may complain “oh, how complicated this OSGi-thingy is”. Well, “sun.misc.Unsafe” was never intended to be used outside the JVM. Java 9 will correct this with its module system. OSGi can already do that today. But it also provides ways to solve this.

If you prefer to use system properties, a different launcher or the “two fragment” approach, that is up to you and your situation. For me the problem simply was to make it happen without changing either Apache Camel or the launcher configuration of Eclipse Kura. So I went with the “two fragments” approach.

Thanks

I am just writing this down in order to help others. And I got help from others to solve this myself. So thanks to the people who posted this “on the net”; it has been a long time since I stumbled over your posts while googling for a solution, and I am sorry that I forgot where I initially found it.

Also thanks to Neil Bartlett for pointing out the OSGi conform solution with “org.osgi.framework.bootdelegation”.

Bringing OPC UA to Apache Camel

My first two weeks at Red Hat have been quite awesome! There is a lot to learn and one of the first things I checked out was Apache Camel together with Eclipse Milo. While Camel is a well-known open source project, Eclipse Milo is a newcomer project at the Eclipse Foundation. And while it is officially still in the incubation phase, it is a tool you can really work with! Eclipse Milo is an OPC UA client and server implementation.

Overview

Although Apache Camel already has an OPC DA connector, based on openSCADA’s Utgard library (sorry, but I couldn’t resist 😉 ), OPC UA is a completely different thing. OPC DA remote connectivity is based on DCOM and has always been really painful. That’s also the reason for the name: Utgard. But with OPC UA that has been cleared up. It features different communication layers, the most prominent one, up until now, being the custom, TCP based binary protocol.

I started by investigating the way Camel works and dug a little bit into the API of Eclipse Milo. From an API perspective, both tools couldn’t be more different. Where Camel tries to focus on simplicity, reducing the full complexity down to a very slim and simple interface, Milo simply unleashes the full complexity of OPC UA into a very nice, but complex, Java 8 style API. Now you can argue night and day about which is the better approach; with a little bit of glue code, both sides work perfectly together.

To make it short, the final result is an open source project on GitHub: ctron/de.dentrassi.camel.milo, which features two Camel components providing OPC UA client and OPC UA server support. Meaning that it is possible now to consume or publish OPC UA value updates by simply configuring a Camel URI.

Examples

For example:

milo-client:tcp://foo:bar@localhost:12685?nodeId=items-MyItem&namespaceUri=urn:org:apache:camel

This can be used to configure an OPC UA client connection to “localhost:12685”. Since the component implements both a Camel producer and a consumer, it is possible to subscribe to/monitor this item or to write value updates.
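
As a small sketch in the Java DSL (with the endpoint options taken from the URI above), subscribing and logging value updates could look like this:

// monitor the item and log incoming value updates
from("milo-client:tcp://foo:bar@localhost:12685?nodeId=items-MyItem&namespaceUri=urn:org:apache:camel")
    .log("Value update: ${body}");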

The following URI creates a server item named “MyItem”:

milo-server:MyItem

This item can then be accessed using an OPC UA client. For the server component, configuration options like bind address and port are located on the Camel component, not the endpoint. However, Camel makes it possible to register the same component type multiple times with different configurations.
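
And a corresponding server-side sketch, feeding a value into that item (the timer endpoint and the constant payload are just placeholders):

// write a constant value to the server item once per second
from("timer:feed?period=1000")
    .setBody(constant(42))
    .to("milo-server:MyItem");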

Also see the testing classes, which show a few examples.

What is next?

With help from @hekonsek, who knows a lot more about Camel than I do, we hope to contribute this extension to the Apache Camel project. So once Eclipse Milo has its first release, this could become an “out-of-the-box” experience when using Apache Camel, thanks to another wonderful project of Eclipse IoT of course 😉

Also, with a little bit of luck, there will be a talk at EclipseCon Europe 2016 about this adventure. It will even go a bit further because I do want to bring this module into Eclipse Kura, so that Eclipse Kura will feature OPC UA support using Apache Camel.

10 years

Many of my blog posts are actually a “note to self”. Things I write down in order not to forget them. This one even goes a bit further. So save yourself some time and skip it 😉

10 years ago I started an adventure [1]. Creating an open source SCADA system and a business with that. Having an open source SCADA system was not the business case back then, it was just a side effect.

So in those 10 years a lot of things have happened. Just think back a moment: there was no “smart phone” back then. Although there was Java 5, most of us hesitated and stuck with 1.4, not having generics. 32 bit was standard. Raspberry Pis had not been invented yet (though I did have a Lego Mindstorms NXT). There was no Git and you could count yourself lucky if you had Subversion. Not talking about services like GitHub, Travis-CI or other hipster cloud things.

Also in the area that I worked in, let’s just label it “SCADA”, a lot has happened. During that time we developed openSCADA, which later became an Eclipse project named “Eclipse SCADA” and is now “Eclipse NeoSCADA”. And similar things happened in that industry field. Instead of closed source, C based, isolated systems, we now have a more open field, where data is transmitted by virtual machine based systems, wrapped up in XML or JSON structures [2], re-using code from open source projects. And quite many of those things happen in the Eclipse IoT working group and top-level project, which I am glad to be a part of.

Looking back 10 years again, Eclipse was an IDE, like Apache was a web server [3]. Today both foundations govern a huge set of projects, far away from their original starting points, and quite a few of them have, in some way, to do with IoT.

So I am really looking forward to next Monday, when I will start a new position in the IoT team at Red Hat. To me this is a great opportunity and a great adventure into the next 10 years. I am sure that a lot will happen in the area of IoT and communication between devices and services. And I appreciate that I can be a part of this.

But I will also miss what I am leaving behind. Not only the software that I made and the things I have accomplished, but, more importantly, colleagues who became friends.

[1] To be honest, it was 10 years and 3 months ago. Then again the first three months were more preparing than actually doing.
[2] Not sure if this is an improvement, but it is the way it goes.
[3] I know it was more back then already, but not as it is now

New version of Maven RPM builder

I just released a new version of the Maven RPM builder. Version 0.6.0 allows one to influence the way the RPM release information is generated during a SNAPSHOT build (also see issue #2).

While the default behavior is still the same, it is now possible to specify the snapshotBuildId, which will then be added as release suffix instead of the current timestamp. Setting forceRelease can be used to disable the SNAPSHOT specific logic altogether and just use the provided release information.
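
For illustration, such a configuration could look like this in the POM (a sketch only; the plugin coordinates are assumed here to be de.dentrassi.maven:rpm, please check the actual plugin documentation):

<plugin>
    <groupId>de.dentrassi.maven</groupId>
    <artifactId>rpm</artifactId>
    <version>0.6.0</version>
    <configuration>
        <!-- use this id as the release suffix of SNAPSHOT builds, instead of a timestamp -->
        <snapshotBuildId>build1</snapshotBuildId>
        <!-- or: disable the SNAPSHOT specific logic and use the release information as provided -->
        <!-- <forceRelease>true</forceRelease> -->
    </configuration>
</plugin>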

Mattermost at Eclipse

About half a year back Cédric and I started a test to see if Mattermost is a valuable tool for the Eclipse community. Failure was an option, since a new tool should bring some benefit to the community.

The server this instance was running on was sponsored by my current employer, IBH SYSTEMS GmbH. The test was scheduled to terminate at the end of June. And since I will move to Red Hat, starting in July, we were forced to make a decision:

🙂 Mattermost now is a permanent, community supported service of the Eclipse Foundation. Hosted on the Eclipse Foundation’s infrastructure, but supported by its community. We also dropped the “-test” in the domain name. So please update your links, if you had some. All content was migrated to the new server.

We also set up a new IRC bridge, which bridges the IRC channel #eclipse-dev on freenode to the Mattermost channel developers-qa.

Cédric and I proposed a talk for EclipseCon Europe 2016 to show the community what Mattermost is and how it can be used for engaging users and committers.

Happy chatting 😉

Writing RPM files … in plain Java … on Maven Central

A few weeks back I wrote a blog post about writing RPM files in plain Java.

What was left over was the fact that the library was not available outside of Package Drone itself. Although it was created as stand-alone functionality, you would need to fetch the JAR and somehow integrate it into your build.

With the recent release of Package Drone 0.13.0 I was finally able to officially push the module to Maven Central.

<dependency>
    <groupId>org.eclipse.packagedrone</groupId>
    <artifactId>org.eclipse.packagedrone.utils.rpm</artifactId>
    <version>0.13.0</version>
</dependency>

In the meanwhile I worked on a Maven RPM builder plugin, which allows creating RPM files on any platform. The newest version (0.5.0) has been released today as well, and it already uses the new library.

So working with RPM files just got a bit easier 😉