Eclipse


Development for and with Eclipse

Just a bit of Apache Camel

Sometimes you write something and then nearly forget that you did it … even though it is quite handy at times. So here are a few lines of Apache Camel XML running in Eclipse Kura:

I just wanted to publish some random data from Kura to Kapua, without the need to code, deploy or build anything. Camel came to the rescue:

<routes xmlns="http://camel.apache.org/schema/spring">
  <route id="route1">

    <from uri="timer:1"/>
    <setBody><simple>${bean:payloadFactory.create("value", ${random(100)})}</simple></setBody>
    <to uri="kura-cloud:myapp/topic"/>

  </route>
</routes>

Dropping this snippet into the default XML Camel router:

  • Registers a Kura application named myapp
  • Creates a random number between 0 and 100 every second
  • Converts this to the Kura Payload structure
  • And publishes it on the topic named topic

Google Summer of Code 2017 with Eclipse Kapua

I am happy to announce that Eclipse Kapua got two slots in this year’s Google Summer of Code. Yes, two projects got accepted, and both are for the Eclipse Kapua project.

Anastasiya Lazarenko will provide a simulation of a fish tank and Arthur Deschamps will go for a supply chain simulation. Both simulations are planned to feed their data into Eclipse Kapua using the Kura simulator framework. Although both projects seem quite similar from a high-level perspective, I think they are quite different when it comes to the details.

The basic idea is not to provide a statistically or physically accurate simulation, but something to play around and interact with: spinning up a few virtual instances of both models, hooking them up to our cloud-based IoT solution, interacting a bit with them and getting some reasonable feedback values.

For Kapua this will definitely mean evolving the simulator framework based on the feedback from both students, making it (hopefully) easier to use for other tasks. And maybe, just maybe, we can also go the extra mile and make the same simulations available for Eclipse Hono.

If you want to read more about Anastasiya and Arthur just read through their introductions on kapua-dev@eclipse.org and give them a warm welcome:

read Anastasiya’s introduction
read Arthur’s introduction

Best of luck to you!

OPC UA with Apache Camel

Apache Camel 2.19.0 is close to its release and the OPC UA component called “camel-milo” will be part of it. This is my Eclipse Milo backed component which was previously hosted in my personal GitHub repository ctron/de.dentrassi.camel.milo. It got accepted into Apache Camel and will be part of the 2.19.0 release. As there are already release candidates available, I think it is a great time to give a short introduction.

In a nutshell OPC UA is an industrial IoT communication protocol for acquiring telemetry data and command and control of industrial grade automation systems. It is also known as IEC 62541.

The Camel Milo component offers both an OPC UA client (milo-client) and server (milo-server) endpoint.

Running an OPC UA server

The following Camel example is based on Camel Blueprint and provides some random data over OPC UA, acting as a server:

Example project layout

The blueprint configuration would be:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="
	http://www.osgi.org/xmlns/blueprint/v1.0.0 https://osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
	http://camel.apache.org/schema/blueprint https://camel.apache.org/schema/blueprint/camel-blueprint.xsd
	">

	<bean id="milo-server"
		class="org.apache.camel.component.milo.server.MiloServerComponent">
		<property name="enableAnonymousAuthentication" value="true" />
	</bean>

	<camelContext xmlns="http://camel.apache.org/schema/blueprint">
		<route>
			<from uri="timer:test" />
			<setBody>
				<simple>${random(0,100)}</simple>
			</setBody>
			<to uri="milo-server:test-item" />
		</route>
	</camelContext>

</blueprint>

And adding the following Maven build configuration:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

	<modelVersion>4.0.0</modelVersion>

	<groupId>de.dentrassi.camel.milo</groupId>
	<artifactId>example1</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<packaging>bundle</packaging>

	<properties>
		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
		<camel.version>2.19.0</camel.version>
	</properties>

	<dependencies>
		<dependency>
			<groupId>ch.qos.logback</groupId>
			<artifactId>logback-classic</artifactId>
			<version>1.2.1</version>
		</dependency>
		<dependency>
			<groupId>org.apache.camel</groupId>
			<artifactId>camel-core-osgi</artifactId>
			<version>${camel.version}</version>
		</dependency>
		<dependency>
			<groupId>org.apache.camel</groupId>
			<artifactId>camel-milo</artifactId>
			<version>${camel.version}</version>
		</dependency>
	</dependencies>

	<build>
		<plugins>
			<plugin>
				<groupId>org.apache.felix</groupId>
				<artifactId>maven-bundle-plugin</artifactId>
				<version>3.3.0</version>
				<extensions>true</extensions>
			</plugin>
			<plugin>
				<groupId>org.apache.camel</groupId>
				<artifactId>camel-maven-plugin</artifactId>
				<version>${camel.version}</version>
			</plugin>
		</plugins>
	</build>

</project>

This allows you to simply run the OPC UA server with:

mvn package camel:run

Afterwards you can connect with the OPC UA client of your choice and subscribe to the item test-item, receiving that random number.
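If you prefer to stay within Camel for the client side as well, a second route could subscribe to that item using the milo-client endpoint. This is only a sketch: the node ID and query parameter syntax shown here are assumptions, check the camel-milo documentation for the exact form used by your version:

```xml
<route>
  <!-- hypothetical node ID; look up the server's actual namespace index
       and item identifier in its address space -->
  <from uri="milo-client:opc.tcp://localhost:12685?node=RAW(ns=2;s=item-test-item)"/>
  <log message="test-item is now: ${body}"/>
</route>
```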

Release candidate

As this is currently a release candidate of Camel 2.19.0, it is necessary to add the release candidate Maven repository to the pom.xml. I omitted this in the example above, as it will no longer be necessary once Camel 2.19.0 is released:

<repositories>
	<repository>
		<id>camel</id>
		<url>https://repository.apache.org/content/repositories/orgapachecamel-1073/</url>
	</repository>
</repositories>

<pluginRepositories>
	<pluginRepository>
		<id>camel</id>
		<url>https://repository.apache.org/content/repositories/orgapachecamel-1073/</url>
	</pluginRepository>
</pluginRepositories>

The repository URLs above may change as a new release candidate gets built. In that case you will need to update them to the appropriate repository URL.

What’s next?

Once Camel 2.19.0 is released, I will also mark my old, personal GitHub repository as deprecated and point people towards this new component.

And of course I am happy to get some feedback and suggestions.

Simulating telemetry streams with Kapua and OpenShift

Sometimes it is necessary to have some simulated data instead of fancy sensors attached to your IoT setup. As Eclipse Kapua starts to adopt Elasticsearch, it seemed necessary to actually test the inbound telemetry stream of Kapua: data coming from the gateway, being processed by Kapua, stored into Elasticsearch and then retrieved back from Elasticsearch over the Kapua REST API. A lot can go wrong here ;-)

The Kura simulator, which is now hosted in the Kapua repository, seemed to be the right place to do this. That way we can not only test this inside Kapua, but also allow different use cases for simulating data streams outside of unit tests, and we can leverage the existing OpenShift integration of the Kura simulator.

The Kura simulator now also has the ability to send telemetry data. In addition there is a rather simple simulation model which can use existing value generators and map those to a more complex metric setup.

From a programmatic perspective, creating a simple telemetry stream would look like this:

GatewayConfiguration configuration = new GatewayConfiguration("tcp://kapua-broker:kapua-password@localhost:1883", "kapua-sys", "sim-1");
try (GeneratorScheduler scheduler = new GeneratorScheduler(Duration.ofSeconds(1))) {
  Set<Application> apps = new HashSet<>();
  apps.add(simpleDataApplication("data-1", scheduler, "sine", sine(ofSeconds(120), 100, 0, null)));
  try (MqttAsyncTransport transport = new MqttAsyncTransport(configuration);
       Simulator simulator = new Simulator(configuration, transport, apps)) {
      Thread.sleep(Long.MAX_VALUE);
  }
}

The Generators.simpleDataApplication creates a new Application from the provided map of functions (Map<String,Function<Instant,?>>). This is a very simple application, which reports a single metric on a single topic. The Generators.sine function returns a function which creates a sine curve using the provided parameters.

Now one might ask: why is this a Function<Instant,?>, wouldn’t a simple Supplier be enough? There is a good reason for that. The expectation of the data simulator is that the telemetry data is derived from the provided timestamp. This is done in order to generate predictable timestamps and values along the communication path. In this example we only have a single metric in a single instance. But it is possible to scale up the simulation to run 100 instances on 100 pods in OpenShift. In this case each simulation step in each JVM would receive the same timestamp, and thus each of those 100 instances should generate the same values, sending the same timestamps upwards to Kapua. Validating this information later on becomes quite easy, as you can not only measure the time delay of the transmission, but also check if there are inconsistencies in the data, gaps or other issues.
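The idea of a timestamp-derived generator can be sketched in a few lines of plain Java. This is not the actual Generators.sine implementation, just an illustration of why the same Instant must always yield the same value:

```java
import java.time.Instant;
import java.util.function.Function;

public class DeterministicSine {

    // A value generator derived from the timestamp: evaluating it twice for
    // the same Instant yields the same value, so 100 simulator instances fed
    // identical timestamps report identical data.
    public static Function<Instant, Double> sine(long periodMillis, double amplitude, double offset) {
        return timestamp -> {
            double phase = (timestamp.toEpochMilli() % periodMillis) / (double) periodMillis;
            return offset + amplitude * Math.sin(phase * 2.0 * Math.PI);
        };
    }

    public static void main(String[] args) {
        Function<Instant, Double> generator = sine(120_000, 100, 0);
        Instant now = Instant.ofEpochMilli(30_000);
        // deterministic: two "instances" evaluating the same timestamp agree
        System.out.println(generator.apply(now).equals(generator.apply(now))); // prints true
    }
}
```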

When using the SimulationRunner, it is possible to configure data generators instead of coding:

{
 "applications": {
  "example1": {
   "scheduler": { "period": 1000 },
   "topics": {
    "t1/data": {
     "positionGenerator": "spos",
     "metrics": {
      "temp1": { "generator": "sine1", "name": "value" },
      "temp2": { "generator": "sine2", "name": "value" }
     }
    },
    "t2/data": {
     "metrics": {
      "temp1": { "generator": "sine1", "name": "value" },
      "temp2": { "generator": "sine2", "name": "value" }
     }
    }
   },
   "generators": {
    "sine1": {
     "type": "sine", "period": 60000, "offset": 50, "amplitude": 100
    },
    "sine2": {
     "type": "sine", "period": 120000, "shift": 45.5, "offset": 30, "amplitude": 100
    },
    "spos": {
     "type": "spos"
    }
   }
  }
 }
}

For more details about this model see: Simple simulation model in the Kapua User Manual.

And of course this can also be managed with the OpenShift setup. Loading a JSON file works like this:

oc create configmap data-simulator-config --from-file=KSIM_SIMULATION_CONFIGURATION=../src/test/resources/example1.json
oc set env --from=configmap/data-simulator-config dc/simulator

Finally, it is now possible to visually inspect this data with Grafana, directly accessing the Elasticsearch storage.

Developing for Eclipse Kura on Windows

Every now and then it is fun to leave the environment you are used to and do something completely different. So this journey took me to IntelliJ and Windows 10. And yes, I am glad to be back in Linux/Eclipse-land. But still, I think something rather interesting came out of this.

It all started when I helped my colleague Aurélien Pupier to get his environment ready for his talk at the Eclipse IoT day Grenoble. If you missed this talk, you can watch a recording of it. He wanted to present the Camel developer tools. The problem was that he was working on a Windows laptop and wanted to demonstrate Eclipse Kura in combination with the JBoss Tools IDE. However, Kura can only run on Linux, and he wanted to run JBoss Tools natively on his Windows machine.

Of course you could come up with some sort of virtual machine setup, but we wanted something which was easier to reproduce in case there was some issue with the laptop before the presentation.

Creating a docker image of Kura

The first step was to create a docker image of Kura. Currently Kura doesn’t offer any support for Docker, so that had to be created from scratch. As there is no x86_64 distribution of Kura and no emulator distribution, it was necessary to do some rather unusual hacks. The background is that Kura has a rather crude build system which assembles a few distributions at the end of the build. Kura also requires some hardware interfaces in order to work properly. For those hardware interfaces there are emulator replacements for use in a local developer setup. However, there is neither a distribution assembly for x86_64 nor one using the emulator replacements, and the whole build system is focused on creating Linux-only images. The solution was to simply rip out all functionality which was in the way and create a patch file.

This patch file and the docker build instructions are now located at a different repository where I can easily modify those and hook it up to the DockerHub build system: ctron/kura-emulator. Currently there are three tags for docker images in the Kura Emulator DockerHub repository: latest (which is the most recent, but stable release), 3.0.0-RC1 and develop (most recent, less stable). As there is currently no released version of Kura 3.0.0, the latest tag is also using the develop branch of Kura. The 3.0.0-RC1 tag is a stable version of the emulator which is known to work and won’t be updated in the future.

There is a more detailed README file in the GitHub repository which explains how to use and build the emulator yourself. In a nutshell you can start it with:

docker run -ti -p 8080:8080 ctron/kura-emulator

And afterwards you can navigate with your browser to http://localhost:8080 and use the Kura Web UI.

As Docker is also available for Windows, this will work the same way on either Linux or Windows, and although I didn’t test it, it should also work on Mac OS X.

JMX & Debugging

As the Camel tooling makes use of JMX, it was necessary to also enable JMX support for Kura, which normally is not available. By setting the JAVA_OPTS environment variable it is possible to enable not only JMX but also plain Java debugging for the docker image. Of course you will need to publish the selected ports with -p when running the docker image. And on Windows you cannot simply use localhost; you will need to use the IP addresses created by Docker for Windows: also see the README.md.
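For example, an invocation enabling both could look roughly like this. This is a sketch: the port numbers and the exact JMX flags are illustrative assumptions, the README documents the tested variant:

```shell
# run the emulator with JMX (9010) and Java remote debugging (5005) enabled;
# the flag values here are examples, not the canonical setup
docker run -ti -p 8080:8080 -p 9010:9010 -p 5005:5005 \
  -e JAVA_OPTS="-Dcom.sun.management.jmxremote \
    -Dcom.sun.management.jmxremote.port=9010 \
    -Dcom.sun.management.jmxremote.authenticate=false \
    -Dcom.sun.management.jmxremote.ssl=false \
    -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005" \
  ctron/kura-emulator
```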

Drop in & activate

After the conference was over, I started to think about what we actually had achieved by doing all this. We had a ready-to-run Kura image, dockerized, capable of running on Windows (with Docker), debuggable. The only part which was still missing was the ability to add a new, custom bundle to the emulator.

Apache Felix File Install to the rescue! In the past I created an Apache Felix File Install DP for Kura (DP = deployment package for Kura). File Install monitors a directory and automatically loads, unloads and updates the OSGi JAR files which you drop into it. The DP can simply be dropped into Kura, which extends Kura with this File Install functionality.

So I pre-seeded the Kura docker image with the File Install DP and declared a volume mount, so that you can simply mount a host directory into the docker image. Dropping a file into that directory on the host system makes it available to the docker container, where File Install will automatically pick it up and start it.

docker run -ti -p 8080:8080 -v c:/path/to/bundles:/opt/eclipse/kura/load ctron/kura-emulator

And this even works with Docker for Windows, if you share your drive first:

Share drive with Docker

Choose your tools

Currently Kura requires a rather complicated setup for developing applications. You will need to learn about Eclipse PDE, target platforms, Tycho for Maven and a bunch of other things to get your Kura application developed, built and packaged.

I already created a GitHub repository showing a different way to develop Kura applications: ctron/kura-examples. Those projects use plain Maven, the maven-bundle-plugin and my osgi-dp plugin to create the final DP. The examples also make use of the newer OSGi annotations instead of requiring you to craft all the OSGi metadata by hand.

So if you wanted, you could already use your favorite IDE and start developing Kura applications in style. But in order to run them, you still needed a Kura device. With this docker image you can now simply let the emulator run and let File Install pick up the compiled results.

Summary

So yes, it is possible to use IntelliJ on Windows to develop and debug your Kura application, in a stylish fashion. Or you can simply do the same, just using an excellent IDE like Eclipse and an awesome operating system like Linux, with the same stylish approach ;-)

IEC 60870-5-104 with Apache Camel

Yesterday the release 0.4.0 of Eclipse NeoSCADA™ was made available. This release brings a cool new feature: an IEC 60870-5-104 stack, written in Java, licensed under the EPL and available on Maven Central. See also the Eclipse Wiki: https://wiki.eclipse.org/EclipseNeoSCADA/Components/IEC60870

So it was time to update my Apache Camel component for IEC 60870 and finally release it to Maven Central with proper dependencies on Eclipse NeoSCADA 0.4.0.

For more information, see my page about the IEC 60870 Apache Camel component.

In a nutshell you can install it with the following commands into a running Karaf container and start using it with Apache Camel:

feature:repo-add mvn:org.apache.camel.karaf/apache-camel/2.18.0/xml/features
feature:repo-add mvn:de.dentrassi.camel.iec60870/feature/0.1.1/xml/features
feature:install camel-iec60870

But of course it can also be used outside of OSGi. In a standalone Java application or in the various other ways you can use Apache Camel.

Testing Kapua with simulated Kura gateways

Now you got your pretty new OpenShift setup of Eclipse Kapua and want to give your IoT cloud a test run?! Testing it out with 100 devices, just for fun? Or even more? But you are too lazy to flash 1000 SD cards for your Raspberry Pi cluster? Here comes the Kura simulator framework. ;-)

In order to provide some automatic testing for Kapua, I started working on a simulator framework which simulates Kura instances completely in Java. No backend needed, no hardware needed, and able to run multiple instances in a single JVM. All hosted on GitHub at ctron/kura-simulator.

Kura simulator instances in Kapua

The basic idea was to create a set of classes which can be used in automated unit tests in order to simulate a Kura gateway, but which allow for finer-grained control over it, for testing the good, the bad and the ugly. A real Kura instance would of course be a more realistic test partner, but then again this would have quite a few drawbacks. First of all, Kura cannot be embedded into a unit or integration test. It has far too many dependencies on directory structures, command line utilities and native libraries, and it would also require an OSGi container to be started. Second, Kura would always behave like Kura. For some tests this may be fine, but if you want to test corner cases where the gateway responds in a way which is not expected by Kapua, then this cannot be done with Kura.

So running a single Kura simulator can be as easy as:

ScheduledExecutorService downloadExecutor = 
   Executors.newSingleThreadScheduledExecutor(new NameThreadFactory("DownloadSimulator"));

GatewayConfiguration configuration =
   new GatewayConfiguration("tcp://kapua-broker:kapua-password@localhost:1883", "kapua-sys", "sim-1");

Set<Application> apps = new HashSet<>();
apps.add(new SimpleCommandApplication(s -> String.format("Command '%s' not found", s)));
apps.add(AnnotatedApplication.build(new SimpleDeployApplication(downloadExecutor)));

try (MqttSimulatorTransport transport = new MqttSimulatorTransport(configuration);
     Simulator simulator = new Simulator(configuration, transport, apps)) {
    Thread.sleep(Long.MAX_VALUE);
    logger.info("Bye bye...");
} finally {
  downloadExecutor.shutdown();
}

Of course, scaling this up and running a few more instances isn’t a big deal either. Running this in a docker container and scaling it up even more with OpenShift works fine as well. So testing any number of gateways just became a lot easier.
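Scaling up in OpenShift can then be a single command. The deployment config name simulator is an assumption about your setup:

```shell
# scale the simulator deployment to 100 pods;
# "dc/simulator" is an assumed deployment config name
oc scale dc/simulator --replicas=100
```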

Currently the simulator can emulate the command service (V1) and most of the deploy service (V2). The configuration service is still missing, but should get implemented in the next few days. Of course it is also possible to register a custom application and provide some metrics yourself.

Camel and IEC 60870-5-104

With the upcoming release 0.4.0 of Eclipse NeoSCADA™, the IEC 60870-5-104 implementation will finally make its way back into NeoSCADA. This will allow me to finally release the IEC 60870 component for Apache Camel to Maven Central.

The Camel components for IEC 60870 are based on the NeoSCADA implementation and provide both the client and the server side of the protocol. Although the implementation of IEC 60870 does not cover all message types defined by the standard, all relevant types for data transmission and control are implemented, and other modules can be added through an extensible mechanism, using the core layers of the protocol.

For Camel there are two endpoint types: iec60870-server and iec60870-client. These allow you either to offer data via IEC 60870 or to actively request data from another IEC 60870 server.

The client component will open a connection to the remote station and initiate the data transmission. The remote station will then send updates for all addresses, but the Camel component will only forward events to connected endpoints. When the connection breaks, the component will periodically try to re-establish it. All events coming from the IEC connection can of course be processed with Camel.

For the server side, the Camel component holds an internal data model which can be filled using Camel routes. This internal state is then published to IEC clients connecting to the server instance. It also allows the use of background transmission or batching of events when required.
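Put together, a server route and a client route could look roughly like this. This is only a sketch: the host:port/address endpoint syntax and the five-octet address shown here are assumptions, check the component page for the exact format:

```xml
<!-- server side: feed a value into the internal data model -->
<route>
  <from uri="timer:feed"/>
  <setBody><simple>${random(0,100)}</simple></setBody>
  <to uri="iec60870-server:localhost:2404/00-00-00-00-01"/>
</route>

<!-- client side: receive updates for that address from the remote station -->
<route>
  <from uri="iec60870-client:localhost:2404/00-00-00-00-01"/>
  <log message="IEC value update: ${body}"/>
</route>
```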

Now what can you actually do with IEC 60870 and Apache Camel? Well, to be honest, if you have never heard about IEC 60870 and don’t have a proper use case or specific requirement for it, then you should probably look for something different to play with ;-) IEC 60870 is used to remotely control and monitor electrical systems and power control systems (see the Wikipedia page about IEC 60870-5). On the other hand, if you do want to use 60870, then the Apache Camel component can make it pretty easy to provide data over the IEC protocol or get data out of an IEC 60870 based system.

As routing data with Camel is easy, you can for example create a very simple mock device on a Raspberry Pi for testing your system with an IEC component. And you can do all of this with pure open source (EPL licensed) software. You can also extract data out of your application and offer it towards another system which explicitly requires a transmission based on IEC 60870.

When the component is released in the next few weeks, I will hopefully find the time to provide some examples showing what you can do with IEC 60870 and Apache Camel.

Released version 0.1.0 of OPC UA component for Camel

After Eclipse Milo™ 0.1.0 was released a few days back and became available on Maven Central this week, it was time to update my OPC UA component for Apache Camel to use the release version of Milo.

This means that there is now a released version of the Apache Camel Milo component, available on Maven Central as well, which can either be used standalone or dropped directly into an OSGi container like Apache Karaf.

The basics

The component is available from Maven Central under the group ID de.dentrassi.camel.milo and the source code is available on GitHub: ctron/de.dentrassi.camel.milo

For more details also see: Apache Camel component for OPC UA

If you want to use it as a Maven dependency, use:


<dependency>
  <groupId>de.dentrassi.camel.milo</groupId>
  <artifactId>camel-milo</artifactId>
  <version>0.1.0</version>
</dependency>

Or for the Apache Karaf feature:

mvn:de.dentrassi.camel.milo/feature/0.1.0/xml/features

Plain Java

If you want to have a quick example you can clone the GitHub repository and simply compile and run an example using the following commands:

git clone https://github.com/ctron/de.dentrassi.camel.milo
cd de.dentrassi.camel.milo/examples/milo-example1
mvn camel:run

This will compile and run a simple example which transfers all temperature measurements from the iot.eclipse.org MQTT server, topic javaonedemo/eclipse-greenhouse-9home/sensors/temperature, to the OPC UA item item-GreenHouse.Temperature, namespace urn:org:apache:camel, on the connection opc.tcp://localhost:12685.

The project is a simple OSGi Blueprint bundle which can also be run by Apache Camel directly. The only configuration is the blueprint file:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <bean id="milo-server" class="org.apache.camel.component.milo.server.MiloServerComponent">
        <property name="enableAnonymousAuthentication" value="true"/>
    </bean>

    <camelContext xmlns="http://camel.apache.org/schema/blueprint">
      <route id="milo1">
        <from uri="paho:javaonedemo/eclipse-greenhouse-9home/sensors/temperature?brokerUrl=tcp://iot.eclipse.org:1883"/>
        <convertBodyTo type="java.lang.String"/>
        <log message="iot.eclipse.org - temperature: ${body}"/>
        <to uri="milo-server:GreenHouse.Temperature"/>
      </route>
    </camelContext>

</blueprint>

This configures a Camel Milo server component and routes the data from MQTT to OPC UA.

Apache Karaf

If you compile the previous example using:

mvn package

You can download and start an Apache Karaf instance, add the Camel Milo component as a feature and deploy the bundle:

feature:repo-add mvn:de.dentrassi.camel.milo/feature/0.1.0/xml/features
feature:repo-add mvn:org.apache.camel.karaf/apache-camel/2.18.0/xml/features
feature:install aries-blueprint shell-compat camel camel-blueprint camel-paho camel-milo

The next step will download and install the example bundle. If you did compile this yourself, then use the
path of your locally compiled JAR. Otherwise you can also use a pre-compiled example bundle:

bundle:install -s https://dentrassi.de/download/camel-milo/milo-example1-0.1.0-SNAPSHOT.jar

To check if it works you can connect using an OPC UA client or peek into the log file of Karaf:

karaf> log:tail
2017-01-11 15:11:45,348 | INFO  | -930541343163004 | milo1  | 146 - org.apache.camel.camel-core - 2.18.0 | iot.eclipse.org - temperature: 21.19
2017-01-11 15:11:45,958 | INFO  | -930541343163004 | milo1  | 146 - org.apache.camel.camel-core - 2.18.0 | iot.eclipse.org - temperature: 21.09
2017-01-11 15:11:49,648 | INFO  | -930541343163004 | milo1  | 146 - org.apache.camel.camel-core - 2.18.0 | iot.eclipse.org - temperature: 21.19

FUSE tooling

If you want some more IDE integration you can quickly install the JBoss FUSE tooling and connect via JMX to either the Maven controlled instance (mvn camel:run) or the Karaf instance and monitor, debug and trace the active Camel routes:

FUSE tooling with Milo

What is next?

For one this component will hopefully become part of Apache Camel itself. And of course there is always something to improve ;-)

I also updated the Kura Addon for Milo, which provides the Milo Camel component for Eclipse Kura 2.1.0, which was recently released. This component is now also available on Maven Central and can easily be deployed into Kura. See the Kura Addons page for more information.

Then there are a few locations where I used SNAPSHOT versions of Milo, and for some I promised an update. So I will try to update as many of those locations as I can with links to the released versions of those components.

Remote managing Eclipse Kura on Apache Karaf with ECF

To be honest, I had my troubles in the past with the Eclipse Communication Framework (ECF). Not that it is a bad framework, but whatever I started was complicated and never really worked for me. This story is different!

A few months back the Eclipse Kura project ran into an issue: the plugin which was being used for remote managing a Kura instance from an IDE (mToolkit) just kind of went away (issue #496). There is some workaround for that now, but the problems around mToolkit still exist. Besides the fact that it is no longer maintained, it is also rather buggy. Deploying a single bundle takes about a minute for me. Of course, using the Apache File Install package for Kura would also help here ;-)

But having a decent IDE integration would also be awesome. So when Scott Lewis from the ECF project contacted me about that, I was ready to give it a try. Unfortunately, the whole setup required more than Kura could handle at that time. But now we do have support for Java 8 in Kura, and there is also some basic support for running Kura on Karaf, including a docker image with the Kura emulator running on Karaf.

So I asked Scott for some help in getting this up and running, and the set of instructions was rather short. In the following examples I am assuming you are running RHEL 7, forgive me if you are not ;-)

First we need to spin up a new Kura emulator instance:

sudo docker run -ti --net=host ctron/kura:karaf-stable

We are mapping all networking to the host instance, since we are using another port which is not configured in the upstream Dockerfile. There is probably another way, but this is just a quick example.

Then, inside the Karaf instance, install ECF. We configure it first to use “ecftcp” instead of MQTT. But of course you can also go with MQTT or some other adapter ECF provides:

property -p service.exported.configs ecf.generic.server
property -p ecf.generic.server.id ecftcp://localhost:3289/server

feature:repo-add http://download.eclipse.org/rt/ecf/kura.20161206/karaf4-features.xml
feature:install -v ecf-kura-karaf-bundlemgr

Now Kura is ready to go. Following up in the Eclipse IDE, you will need Eclipse Neon running on Java 8:

Add the ECF 3.13.3 P2 repository using http://download.eclipse.org/rt/ecf/3.13.3/site.p2 and install the following components:

  • ECF Remote Services SDK
  • ECF SDK for Eclipse

Next install the preview components for managing Karaf with ECF. Please note, those components are previews and may or may not be released at some point in the future. Add the following P2 repository: http://download.eclipse.org/rt/ecf/kura.20161206/eclipseui and install the following components (disable Group Items by Category):

  • Remote Management API
  • Remote Management Eclipse Consumer

Now comes the fiddly part, this UI is a work in progress, and you have been warned, but it works:

  • Switch to the Remote Services perspective
  • Open a new view: Window -> Show View -> Other… – Select Remote OSGi Bundles
  • Click one of the green + symbols (choose either MQTT or ECFTCP) and enter the address of your Karaf instance (localhost and 3289 for me)

You should already see some information about that target device now. But when you open a new view (as before) named Karaf Features you will also have the ability to tinker around with the Karaf installation.

If you just want to have a quick look, here it is:

ECF connecting to Kura on Karaf

Of course you don’t need to use an IDE for managing Karaf. But having such an integration as an option, is a nice addition. And it shows how powerful a great OSGi setup can be ;-)