Java


Eclipse Milo 0.3, updated examples

A while back I wrote a blog post about OPC UA using Milo, and added a bunch of examples in order to get you started. Time passed by, and now Milo 0.3.x is released with a changed API, so those examples no longer work. Not too much has changed, but the experience of running into compile errors isn’t a good one. I finally found some time to update the examples.


Using PKCS #1 PEM encoded X.509 certificates in Java

PEM is a well-known file format when it comes to certificates. And when using Kubernetes (or OpenShift in my case) it is easy to re-use the internal CA for some tasks.

Except when it comes to Java. Java only supports JKS (its Java-only, binary keystore format) or PKCS12 for keys and certificates. So Google offers you a bunch of tutorials on how to convert PEM encoded certificates to JKS or PKCS12 so that Java can consume them. But that is ugly in a lot of situations. Doing it manually once is fine. But adding this to e.g. a pod becomes a lengthy YAML init container setup, which seems unnecessary to me.
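
For reference, the usual conversion dance on the command line looks something like this (file names are placeholders):

openssl pkcs12 -export -in tls.crt -inkey tls.key -out keystore.p12 -name mycert
keytool -importkeystore -srckeystore keystore.p12 -srcstoretype PKCS12 -destkeystore keystore.jks -deststoretype JKS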

But Java does allow the use of security providers, which can extend the security system. However, searching the net, I couldn’t find anything which would provide a PEM based KeyStore. Maybe that was simply due to the fact that the other “convert PEM to …” tutorials spammed the search results.

So I went along and simply created my own provider. For my own use case, which is using the OpenShift service CA certificate. It only took a few minutes to do the actual implementation as reading a PEM file is no mystery.

In case you need to use a PEM encoded X.509 certificate in Java, you can now either re-encode it with `openssl` on the command line or simply drop in this provider and use `PEM` as the KeyStore type:

<dependency>
  <groupId>de.dentrassi.crypto</groupId>
  <artifactId>pem-keystore</artifactId>
  <version>2.0.0</version>
</dependency>

And then:

KeyStore keyStore = KeyStore.getInstance("PEM");
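
For example, a minimal sketch of loading the OpenShift service CA (it assumes the provider class is named PemKeyStoreProvider and the usual OpenShift mount path; check the project README for the exact names):

import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.Security;

import de.dentrassi.crypto.pem.PemKeyStoreProvider;

public class PemExample {
    public static void main(String[] args) throws Exception {
        // register the provider once, e.g. at application startup
        Security.addProvider(new PemKeyStoreProvider());

        KeyStore keyStore = KeyStore.getInstance("PEM");
        // a plain certificate file has no keystore password
        try (FileInputStream in = new FileInputStream(
                "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt")) {
            keyStore.load(in, null);
        }
        System.out.println("entries: " + keyStore.size());
    }
}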

For more information see: ctron/pem-keystore at GitHub

If you know some other provider which supports this, please let me know and I would be happy to switch, as this is only a scratch for my own itch :) On the other hand, if this is useful to you, then please let me know as well. There are still a few things missing, like keys and Java 9+ support. But maybe you want to submit a pull request for that :D

Update: I did release an update of this provider. Version 2.0 has support for keys and CA bundles.

Varlink for Java – What a wonderful world it could be

Varlink for Java is a Java based implementation of the Varlink interface. This blog post shows how varlink can be used in the Java world to solve the problem of accessing operating system functionality.

Consuming operating system functionality from Java, when running on Linux, has always been a problem. There are numerous examples where people fork processes and parse the result in ways which tend to break the next time you upgrade your CLI tools. Not even thinking about switching to a different version of your favorite Linux distribution or switching to another distribution at all. Of course there have been all kinds of approaches to solve this like JNI, DBus, … Then again, the operating system is way more than the kernel and the desktop. Configuring a network time server, installing additional packages, reading the system log, … And of course in a polyglot world, all this is not necessarily exposed using a C based API.

Over time, and thanks to Harald, I have been following the Varlink project. You can also read more about it in his recent blog post about varlink. Varlink defines itself as:

… is an interface description format and protocol that aims to make services accessible to both humans and machines in the simplest feasible way.

So let’s put that claim to a test. :-)

Quick overview

Varlink uses a socket based client/server approach to communicate. It supports TCP but also Unix domain sockets (UDS). Although the latter is still not officially supported in Java, Netty offers a neat solution which also allows you to use the same networking API with TCP and UDS. Still, let’s go the extra mile and use UDS for this.

The protocol for communicating between client and server is rather simple. The client issues a call and waits for the result. All communication consists of zero-byte terminated strings, which happen to be JSON. I won’t dive into the protocol any further; it really is that simple, and you can read about it in the Varlink protocol documentation anyway.
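
To make this concrete, here is a rough sketch of that wire format with Netty (not the actual varlink-java code): a Unix domain socket via the native epoll transport (requires the netty-transport-native-epoll dependency), inbound frames split on the zero byte, and a call written as a zero-byte terminated JSON string. The socket path and the GetInfo call are only illustrative:

import java.nio.charset.StandardCharsets;

import io.netty.bootstrap.Bootstrap;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.epoll.EpollDomainSocketChannel;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.unix.DomainSocketAddress;
import io.netty.channel.unix.DomainSocketChannel;
import io.netty.handler.codec.DelimiterBasedFrameDecoder;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

public class VarlinkWireSketch {
    public static void main(String[] args) {
        EventLoopGroup group = new EpollEventLoopGroup();
        try {
            Bootstrap bootstrap = new Bootstrap()
                .group(group)
                .channel(EpollDomainSocketChannel.class)
                .handler(new ChannelInitializer<DomainSocketChannel>() {
                    @Override
                    protected void initChannel(DomainSocketChannel ch) {
                        ch.pipeline().addLast(
                            // inbound: split frames on the terminating zero byte
                            new DelimiterBasedFrameDecoder(8192, Unpooled.wrappedBuffer(new byte[] { 0 })),
                            new StringDecoder(StandardCharsets.UTF_8),
                            new StringEncoder(StandardCharsets.UTF_8));
                    }
                });

            Channel channel = bootstrap
                .connect(new DomainSocketAddress("/run/org.varlink.resolver"))
                .syncUninterruptibly().channel();

            // outbound: a JSON call, explicitly terminated with a zero byte
            channel.writeAndFlush("{\"method\":\"org.varlink.service.GetInfo\"}\0")
                .syncUninterruptibly();
        } finally {
            group.shutdownGracefully();
        }
    }
}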

As Netty does most of the networking and Gson takes care of the JSON processing, we can focus on the actual API we want to have. For this, let’s have a closer look at how Varlink works.

Varlink offers services aka “interfaces” to expose their functionality. Every interface also exports information about itself. Varlink interfaces actually run in different processes (or even in the Linux kernel) and expose their functionality over different addresses (e.g. Unix domain sockets or TCP addresses). Therefore a default (well known) service of Varlink is the “resolver”, which allows you to register your service, so that others will be able to find it. As a first step I decided to focus on the client side, consuming APIs rather than publishing them. So the steps required are simply:

  • Contact the resolver
  • Ask for the address of the required service
  • Contact the resolved address
  • Perform the actual operation

Of course talking to the resolver uses the same mechanism as talking to any other interface, as the resolver is a varlink interface itself.
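
On the wire, resolving is just another varlink call. Sketched from my reading of the protocol documentation (double-check the exact field names there), the exchange looks roughly like this:

→ {"method": "org.varlink.resolver.Resolve", "parameters": {"interface": "io.systemd.network"}}
← {"parameters": {"address": "unix:/run/io.systemd.network"}}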

A simple test

After around two to three hours I came up with the following API, contacting the varlink interface io.systemd.network and querying all the existing network interfaces of the system:

try (Varlink v = varlink()) {

  // shorter & sync way

  List<Netdev> devices1 = v
    .resolveSync(Network.class)
    .sync()
    .list();

  dump(devices1);

  // more explicit & async way

  List<Netdev> devices2 = v
    .resolver()
    .async()
    .resolve(Network.class)
    .thenCompose(network -> network.async().list())
    .get();

  dump(devices2);
}

To be honest, for this specific task, I could have also used the Java NetworkInterface API. But the same way I am querying the network interfaces with varlink, I could also access the io.systemd.journal interface or org.kernel.kmod and interface with the system log or the kernel module system.

Just for comparison you can have a look at the Eclipse Kura USB modem functionality, which needs to call a bunch of command line utilities, access lock files, call into JNI code, …

The IDL – Xtext awesomeness

If you don’t know Xtext: it is a toolchain for creating your own DSL. Living in the Eclipse modeling ecosystem, it allows you to define your DSL grammar, and it will take care of creating a parser, a complete editor with code completion, syntax highlighting, support for the language server protocol and much more. It supports the Eclipse IDE, IntelliJ and the plain web. And of course you can create an Xtext grammar for the Varlink IDL quite easily. After around one hour of fighting with grammars, I came up with the following editor:

Varlink IDL editor

As you can see, the Varlink IDL has been parsed. I am pretty sure there are still some issues with the grammar, but it is quite a good start. Now everything is available as a parsed Ecore model and can be visualized or transformed with any of the Eclipse Modeling tools/libraries. Creating a quick diagram editor with Eclipse Sirius is only a few more minutes away.

What is next, what is missing

Altogether this was quite easy. Varlink indeed offers a solution for accessing system services in “the simplest feasible way”. So, what is next?

varlink-java is already available on GitHub. I would like to clean it up a bit, add a decent build setup and publish it on Maven Central. Also adding the Xtext bits in a simple way, if possible; Tycho and plain Maven builds always tend to get in each other’s way.

Varlink offers something called “monitoring”. Instead of getting a single reply to a call, the call can be followed by additional updates: changes in the device list, following log entries, … This is currently not supported by the varlink-java API, but it is an important feature and I really would like to add it as well.
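
As far as I understand the protocol, this maps to setting the "more" flag on the call, with every intermediate reply carrying "continues": true until the final one (the method name here is made up):

→ {"method": "io.systemd.network.Monitor", "more": true}
← {"parameters": {…}, "continues": true}
← {"parameters": {…}, "continues": true}
← {"parameters": {…}}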

In the current example the bindings to io.systemd.network were created manually. This is fine for a first example, but combined with the Xtext based IDL parser it would be a simple task to create a Maven plugin which creates the binding code on the fly, based on the provided varlink files.

Of course, there is so much more: creating a graphical System API browser, the whole server/interface side and dozens of bindings to create.

Conclusion

Varlink is an amazing piece of technology. And mostly because it is that simple. It does offer the right functionality to solve so many issues we currently face when accessing operating system APIs. And it definitely is a candidate to get rid of all the ugly wrapper code around command line calls and other things which are currently necessary to talk to operating system functionality. And simply using plain Java functionality (at least if you go with TCP ;-) ).

Links & stuff

How to install varlink (on Fedora 27, for CentOS use “yum”):

sudo dnf copr enable "@varlink/varlink"
sudo dnf install fedora-varlink
sudo systemctl enable --now org.varlink.resolver.socket
varlink help

Kapua micro client SDK, running on a microcontroller

A few weeks back, while being at EclipseCon France, I stumbled over a nice little gadget. There was a talk from MicroEJ about Java on microcontrollers, and they were showing an IoT related demo based on their development environment. And it seemed they had Eclipse Paho (including TLS) and Google Protobuf running on their JVM without too much trouble.

ST Board with Thermocloud

My first idea was to simply drop the Kapua Gateway Client SDK on top of it, implementing the cloud facing API of MicroEJ, and let their IoT demo publish data towards Kapua.
After a few days I was able to order such an STM32F746G-DISCO board myself and play around with it a little. It quickly turned out that, while it was pretty easy to drop some Java code on the device, using the gateway client SDK was not an option. The MicroEJ JVM is based on Java CLDC 8. Sounds like Java 8, right? Well, it is more like Java 7. Aside from a few classes which are missing, the core features missing were Java 8’s lambdas and enhancements to interfaces (like static methods and default methods).

Rewriting the gateway client SDK in Java 7, dropping the shiny API which we currently have, didn’t sound very appealing. But then again, implementing the Kapua communication stack actually isn’t that complicated, and such an embedded device wouldn’t really need the extensibility and modularity of the Java 8 based gateway client SDK. So after a few hours there was the Kapua micro client SDK, which doesn’t consume any dependencies other than Paho and Protobuf and only uses a minimal set of Java 7 functionality.
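
To give an idea of the level the micro SDK operates at, here is a minimal, Java 7 compatible sketch using plain Paho (the topic layout and the payload are simplified placeholders; the real SDK encodes the Kura protobuf payload format):

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;

public class MicroPublishSketch {
    public static void main(String[] args) throws MqttException {
        MqttClient client = new MqttClient("tcp://localhost:1883", "sim-1");
        MqttConnectOptions options = new MqttConnectOptions();
        options.setUserName("kapua-broker");
        options.setPassword("kapua-password".toCharArray());
        client.connect(options);
        // the real SDK would encode a Kura protobuf payload here
        byte[] payload = "{\"temperature\":21.5}".getBytes();
        client.publish("kapua-sys/sim-1/data/metrics", payload, 0, false);
        client.disconnect();
    }
}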

The second step was to implement the MicroEJ specific APIs and map the calls to the Kapua micro SDK, which wasn’t too difficult either. So now it is possible to simply install the “Kapua Data Channel Provider” from the MicroEJ Community Store. Alternatively you can compile the sources yourself as the code for this adapter is also on GitHub. Once the data channel provider is installed you can fire up any application consuming the DataChannel API, like the “Thermocloud” application, and publish data to Kapua. Please be sure to follow the installation instructions on the Kapua data channel provider for configuring the connection to your Kapua instance.

Kapua Data Channel Provider

Data from Thermocloud in Kapua

As the micro client is capable of running on Java 7, it might also be a choice for people wanting to connect from Android to Kapua without the need to go for Java 8. Since Java 8 on Android still seems to be rather painful, this could be an option.


I would like to thank Laurent and Frédéric from MicroEJ, who did help me fix all the noob-issues I had.

Simulating telemetry streams with Kapua and OpenShift

Sometimes it is necessary to have some simulated data instead of fancy sensors attached to your IoT setup. As Eclipse Kapua starts to adopt Elasticsearch, it seemed necessary to actually test the inbound telemetry stream of Kapua: data coming from the gateway, being processed by Kapua, stored into Elasticsearch and then retrieved back from Elasticsearch over the Kapua REST API. A lot can go wrong here ;-)

The Kura simulator, which is now hosted in the Kapua repository, seemed to be the right place to do this. That way we can not only test this inside Kapua, but also support different use cases for simulating data streams outside of unit tests, and we can leverage the existing OpenShift integration of the Kura simulator.

The Kura simulator now also has the ability to send telemetry data. In addition, there is a rather simple simulation model which can use existing value generators and map them to a more complex metric setup.

From a programmatic perspective, creating a simple telemetry stream would look like this:

GatewayConfiguration configuration = new GatewayConfiguration("tcp://kapua-broker:kapua-password@localhost:1883", "kapua-sys", "sim-1");
try (GeneratorScheduler scheduler = new GeneratorScheduler(Duration.ofSeconds(1))) {
  Set<Application> apps = new HashSet<>();
  apps.add(simpleDataApplication("data-1", scheduler, "sine", sine(ofSeconds(120), 100, 0, null)));
  try (MqttAsyncTransport transport = new MqttAsyncTransport(configuration);
       Simulator simulator = new Simulator(configuration, transport, apps)) {
      Thread.sleep(Long.MAX_VALUE);
  }
}

The Generators.simpleDataApplication creates a new Application from the provided map of functions (Map<String,Function<Instant,?>>). This is a very simple application, which reports a single metric on a single topic. The Generators.sine function returns a function which creates a sine curve using the provided parameters.

Now one might ask: why is this a Function<Instant, ?>, wouldn’t a simple Supplier be enough? There is a good reason for that. The expectation of the data simulator is that the telemetry data is derived from the provided timestamp. This is done in order to generate predictable timestamps and values along the communication path. In this example we only have a single metric in a single instance. But it is possible to scale up the simulation to run 100 instances on 100 pods in OpenShift. In this case each simulation step in one JVM would receive the same timestamp, and thus each of those 100 instances should generate the same values, sending the same timestamps upwards to Kapua. Validating this information later on becomes quite easy, as you can not only measure the time delay of the transmission, but also check if there are inconsistencies in the data, gaps or other issues.
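
For illustration, a deterministic generator along those lines could look like this sketch (not the actual simulator code): the value depends only on the provided timestamp, so every instance fed the same instant reports the same value.

import java.time.Duration;
import java.time.Instant;
import java.util.function.Function;

public final class SineGenerator {
    public static Function<Instant, Double> sine(Duration period, double amplitude, double offset) {
        double periodMillis = period.toMillis();
        return timestamp -> {
            // derive the angle purely from the timestamp, making the output reproducible
            double angle = (timestamp.toEpochMilli() % periodMillis) / periodMillis * 2.0 * Math.PI;
            return offset + amplitude * Math.sin(angle);
        };
    }
}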

When using the SimulationRunner, it is possible to configure data generators instead of coding:

{
 "applications": {
  "example1": {
   "scheduler": { "period": 1000 },
   "topics": {
    "t1/data": {
     "positionGenerator": "spos",
     "metrics": {
      "temp1": { "generator": "sine1", "name": "value" },
      "temp2": { "generator": "sine2", "name": "value" }
     }
    },
    "t2/data": {
     "metrics": {
      "temp1": { "generator": "sine1", "name": "value" },
      "temp2": { "generator": "sine2", "name": "value" }
     }
    }
   },
   "generators": {
    "sine1": {
     "type": "sine", "period": 60000, "offset": 50, "amplitude": 100
    },
    "sine2": {
     "type": "sine", "period": 120000, "shift": 45.5, "offset": 30, "amplitude": 100
    },
    "spos": {
     "type": "spos"
    }
   }
  }
 }
}

For more details about this model see: Simple simulation model in the Kapua User Manual.

And of course this can also be managed with the OpenShift setup. Loading a JSON file works like this:

oc create configmap data-simulator-config --from-file=KSIM_SIMULATION_CONFIGURATION=../src/test/resources/example1.json
oc set env --from=configmap/data-simulator-config dc/simulator

Finally, it is now possible to visually inspect this data with Grafana, directly accessing the Elasticsearch storage.

Released version 0.1.0 of OPC UA component for Camel

Eclipse Milo™ 0.1.0 was released a few days back and has been available on Maven Central since this week, so it was time to update my OPC UA component for Apache Camel to use the release version of Milo.

This means that there is now a released version of the Apache Camel Milo component, available on Maven Central as well, which can either be used standalone or dropped directly into an OSGi container like Apache Karaf.

The basics

The component is available from Maven Central under the group ID de.dentrassi.camel.milo and the source code is available on GitHub: ctron/de.dentrassi.camel.milo

For more details also see: Apache Camel component for OPC UA

If you want to use it as a dependency, use:


<dependency>
  <groupId>de.dentrassi.camel.milo</groupId>
  <artifactId>camel-milo</artifactId>
  <version>0.1.0</version>
</dependency>

Or for the Apache Karaf feature:

mvn:de.dentrassi.camel.milo/feature/0.1.0/xml/features

Plain Java

If you want to have a quick example you can clone the GitHub repository and simply compile and run an example using the following commands:

git clone https://github.com/ctron/de.dentrassi.camel.milo
cd de.dentrassi.camel.milo/examples/milo-example1
mvn camel:run

This will compile and run a simple example which transfers all temperature measurements from the iot.eclipse.org MQTT server, topic javaonedemo/eclipse-greenhouse-9home/sensors/temperature, to the OPC UA tag item-GreenHouse.Temperature, namespace urn:org:apache:camel, on the connection opc.tcp://localhost:12685.

The project is a simple OSGi Blueprint bundle which can also be run by Apache Camel directly. The only configuration is the blueprint file:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <bean id="milo-server" class="org.apache.camel.component.milo.server.MiloServerComponent">
        <property name="enableAnonymousAuthentication" value="true"/>
    </bean>

    <camelContext xmlns="http://camel.apache.org/schema/blueprint">
      <route id="milo1">
        <from uri="paho:javaonedemo/eclipse-greenhouse-9home/sensors/temperature?brokerUrl=tcp://iot.eclipse.org:1883"/>
        <convertBodyTo type="java.lang.String"/>
        <log message="iot.eclipse.org - temperature: ${body}"/>
        <to uri="milo-server:GreenHouse.Temperature"/>
      </route>
    </camelContext>

</blueprint>

This configures a Camel Milo server component and routes the data from MQTT to OPC UA.
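
For comparison, the same route expressed with the Camel Java DSL would look roughly like this:

import org.apache.camel.builder.RouteBuilder;

public class MiloRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("paho:javaonedemo/eclipse-greenhouse-9home/sensors/temperature?brokerUrl=tcp://iot.eclipse.org:1883")
            .convertBodyTo(String.class)
            .log("iot.eclipse.org - temperature: ${body}")
            .to("milo-server:GreenHouse.Temperature");
    }
}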

Apache Karaf

If you compile the previous example using:

mvn package

You can download and start an Apache Karaf instance, add the Camel Milo component as a feature and deploy the bundle:

feature:repo-add mvn:de.dentrassi.camel.milo/feature/0.1.0/xml/features
feature:repo-add mvn:org.apache.camel.karaf/apache-camel/2.18.0/xml/features
feature:install aries-blueprint shell-compat camel camel-blueprint camel-paho camel-milo

The next step will download and install the example bundle. If you did compile this yourself, then use the path of your locally compiled JAR. Otherwise you can also use a pre-compiled example bundle:

bundle:install -s https://dentrassi.de/download/camel-milo/milo-example1-0.1.0-SNAPSHOT.jar

To check if it works you can connect using an OPC UA client or peek into the log file of Karaf:

karaf> log:tail
2017-01-11 15:11:45,348 | INFO  | -930541343163004 | milo1  | 146 - org.apache.camel.camel-core - 2.18.0 | iot.eclipse.org - temperature: 21.19
2017-01-11 15:11:45,958 | INFO  | -930541343163004 | milo1  | 146 - org.apache.camel.camel-core - 2.18.0 | iot.eclipse.org - temperature: 21.09
2017-01-11 15:11:49,648 | INFO  | -930541343163004 | milo1  | 146 - org.apache.camel.camel-core - 2.18.0 | iot.eclipse.org - temperature: 21.19

FUSE tooling

If you want some more IDE integration you can quickly install the JBoss FUSE tooling and connect via JMX to either the Maven controlled instance (mvn camel:run) or the Karaf instance and monitor, debug and trace the active Camel routes:

FUSE tooling with Milo

What is next?

For one this component will hopefully become part of Apache Camel itself. And of course there is always something to improve ;-)

I also updated the Kura Addon for Milo, which provides the Milo Camel component for the recently released Eclipse Kura 2.1.0. This component is now also available on Maven Central and can easily be deployed into Kura. See the Kura Addons page for more information.

Then there are a few locations where I used SNAPSHOT versions of Milo, and for some I promised an update. So I will try to update as many of them as I can with links to the released versions of those components.

Dropping Apache File Install into Eclipse Kura

Sometimes the simple things may be the most valuable. Testing with Eclipse Kura™ on a Raspberry Pi (or any other Eclipse Kura device) may be a bit tricky. Of course you can use the Eclipse UI in combination with mToolkit. But if you want to edit, compile and deploy from a local desktop machine to a Kura device, then you need to click through the Web UI to upload your application. And for this to work you also need to assemble a DP (deployment package).

But what if you could simply drop an OSGi bundle into a directory and let it get picked up by Kura automatically. Thanks to Apache File Install, there already is such a solution. File Install scans a folder and loads every OSGi bundle located in this folder. If a bundle is started and it gets overwritten in the file system, then File Install will reload and restart the bundle.

So deploying and re-deploying to a Kura device is as easy as copying a file to your target with SCP (or the remote copy tool of your choice).
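
For example (host and bundle name are placeholders; the load directory is described below):

scp target/my-bundle-1.0.0.jar pi@raspberrypi:/opt/eclipse/kura/load/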

And installing Apache File Install into Eclipse Kura now just got a bit simpler.

Using Maven Central

Simply navigate to the Packages section of the Eclipse Kura Web UI. Press the “Install” button and choose “URL” and enter the following URL:

https://repo1.maven.org/maven2/de/dentrassi/kura/addons/de.dentrassi.kura.addons.utils.fileinstall/0.1.0/de.dentrassi.kura.addons.utils.fileinstall-0.1.0.dp

After confirming with the Submit button it will take a bit, and then File Install will be installed into your Kura installation. Sometimes it takes a bit longer than Kura expects and you need to reload the Web UI (Ctrl-R) until Kura has performed the installation.

Using the Eclipse Marketplace

Currently the Eclipse Marketplace is focused on hosting plugins for the Eclipse IDE, but this should change rather soon. Still, it is already possible to drag and drop Apache File Install into Eclipse Kura using the following install button:

Drag to your running Eclipse workspace to install Apache File Install for Eclipse Kura

Dragging this button into the Kura Web UI will bring up a confirmation dialog if you want to install the addon. After confirming it will go and fetch the DP and install Apache File Install into the running Kura instance.

Deploying bundles

Now it is time to deploy some bundles. By default the directory where Apache File Install looks for bundles is /opt/eclipse/kura/load. At first this directory will not exist, so it has to be created. Next we simply fetch an example bundle using wget:

$ cd /opt/eclipse/kura
$ mkdir load
$ cd load
$ wget "https://dentrassi.de/download/kura/org.eclipse.kura.example.camel.publisher-1.0.0.jar"

This is an example publisher bundle from the upcoming Kura 2.1.0 release. So if you are still using Kura 2.0, then you can either try a different OSGi bundle, or maybe give Kura 2.1 a try ;-)

Due to a regression in Kura (eclipse/kura#123) there is currently no way to manually start OSGi bundles. So you need to stop Kura and start the local command console (/opt/eclipse/kura/bin/start_kura.sh). On the OSGi shell you can then:

osgi> ss example
"Framework is launched."


id	State       Bundle
116	RESOLVED    org.eclipse.kura.example.camel.publisher_1.0.0
osgi> start 116
osgi> ss example
"Framework is launched."


id	State       Bundle
116	ACTIVE      org.eclipse.kura.example.camel.publisher_1.0.0
osgi>

Once this initial activation has been performed, File Install and OSGi will keep the bundle active. You can re-deploy the OSGi bundle by simply copying a new version over the old one. File Install will detect the change and refresh the bundle.

There is more…

Apache File Install can also update OSGi configurations and it can be configured using a set of system properties. For the full set of options check out the Apache File Install documentation.
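
For example, the watched directory and the poll interval can be changed with the following system properties (values here are just examples):

-Dfelix.fileinstall.dir=/opt/eclipse/kura/load
-Dfelix.fileinstall.poll=5000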

New version of Maven RPM builder

I just released a new version of the Maven RPM builder. Version 0.6.0 allows one to influence the way the RPM release information is generated during a SNAPSHOT build (also see issue #2).

While the default behavior is still the same, it is now possible to specify the snapshotBuildId, which will then be added as release suffix instead of the current timestamp. Setting forceRelease can be used to disable the SNAPSHOT specific logic altogether and just use the provided release information.
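
In a POM this could look like the following sketch (assuming the plugin coordinates de.dentrassi.maven:rpm; the build id value is just an example):

<plugin>
  <groupId>de.dentrassi.maven</groupId>
  <artifactId>rpm</artifactId>
  <version>0.6.0</version>
  <configuration>
    <!-- use a stable id as the release suffix of SNAPSHOT builds, instead of a timestamp -->
    <snapshotBuildId>build42</snapshotBuildId>
    <!-- or disable the SNAPSHOT specific logic altogether -->
    <!-- <forceRelease>true</forceRelease> -->
  </configuration>
</plugin>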

Writing RPM files … in plain Java … on Maven Central

A few weeks back I wrote a blog post about writing RPM files in plain Java.

What was left over was the fact that the library was not available outside of Package Drone itself. Although it was created as stand-alone functionality, you would need to fetch the JAR and somehow integrate it into your build.

With the recent release of Package Drone 0.13.0 I was finally able to officially push the module to Maven Central.

<dependency>
  <groupId>org.eclipse.packagedrone</groupId>
  <artifactId>org.eclipse.packagedrone.utils.rpm</artifactId>
  <version>0.13.0</version>
</dependency>

In the meanwhile I worked on a Maven RPM builder plugin, which allows creating RPM files on any platform. The newest version (0.5.0) was released today as well and already uses the new library.

So working with RPM files just got a bit easier ;-)