OSGi


CEP & Machine learning for IoT – Drools on Kura

Machine learning and predictive maintenance are the poster children of IoT use cases. Having an IoT gateway allows you to pre-process data before you send it upstream to your cloud. It also allows you to make decisions locally, without the actual need for a cloud upload. But as you know from sitting in your favorite restaurant, staring at the menu, making decisions can be quite hard 😉 Complex event processing and machine learning models can help you with that, though.

Eclipse Kura is an open source IoT gateway with a focus on industrial use cases. Drools is an open source rule engine and, with Drools Fusion, provides complex event processing. It also supports making decisions based on Predictive Model Markup Language (PMML) models. So why not bring both components together?! PMML models can be used for all kinds of scenarios, but the classic IoT use case would probably be predictive maintenance, based on a PMML document generated by your machine learning solution in the cloud, sending the “learned” knowledge back to the edge gateway for local processing.

Installation

The Drools addon for Kura provides two DPs (the package type used by Kura), which can be deployed in order to extend Kura with Drools.

Note: If you don’t have a Raspberry Pi at hand, or don’t want to install Kura on some “real” device, you can always use the Kura Emulator docker image (see also Developing for Eclipse Kura in Windows).

Setup

Once the components are installed, it is possible to create a new Drools instance by clicking on the blue “+” on the left side of the Kura services area. Create a new component of the type de.dentrassi.kura.addons.drools.component.DroolsInstance. Choose any unused ID and click “Apply”. It might be necessary to reload the Web UI of Kura at this point, as the refresh doesn’t work properly. When the service is listed in the left hand side “services” list, select it in order to configure it:

The actual rules document comes from the file SimpleScorecard.pmml of the Drools PMML example mpbravo/brms-pmml-example. Also be sure to set the file type to “Predictive Model Markup Language”. Save the changes by clicking “Apply”.

Next we will use Kura Wires to mash up the model with some “data”. We need to use a timer as the input source, as Kura currently doesn’t offer any kind of value generator, like a function or a sine curve. So create a new “timer”; you can leave the default of 10 seconds. Add a new logger, which we simply use for testing; you should set the “verbosity” to “VERBOSE”. Then create a new “DroolsProcess” component with the following configuration:

ID
pmml1 – the ID of the Drools session
Fire all rules
true – after the fact has been injected, the rules have to be fired
Delete after fire
true – after the rules have been fired, we can remove the fact from the session
Fact Package
org.drools.scorecards.example – the package name, from the rules model
Fact Type
SampleScore – the type name, from the rules model
Inputs
age=TIMER – a comma separated mapping of Wire record names to fact object properties
Outputs
result=scorecard_calculatedScore – a comma separated mapping of Wire record names to fact object properties
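
To make it clearer what the DroolsProcess component does with each wire record, here is a minimal plain-Java sketch of the same flow, assuming the Drools 6.x KieHelper API and the fact and field names of the example model (the classpath location of the PMML file and the sample input value are assumptions; the Kura component does all of this for you):

import org.kie.api.KieBase;
import org.kie.api.definition.type.FactType;
import org.kie.api.io.ResourceType;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.rule.FactHandle;
import org.kie.internal.io.ResourceFactory;
import org.kie.internal.utils.KieHelper;

public class ScorecardDemo {
    public static void main ( final String[] args ) throws Exception {
        // compile the PMML document into a KieBase
        final KieBase kieBase = new KieHelper ()
                .addResource ( ResourceFactory.newClassPathResource ( "SimpleScorecard.pmml" ), ResourceType.PMML )
                .build ();

        final KieSession session = kieBase.newKieSession ();
        try {
            // "Fact Package" and "Fact Type" from the configuration above
            final FactType type = kieBase.getFactType ( "org.drools.scorecards.example", "SampleScore" );
            final Object fact = type.newInstance ();
            type.set ( fact, "age", 30.0 ); // input "age", fed by the timer in the Wires graph

            final FactHandle handle = session.insert ( fact );
            session.fireAllRules (); // "Fire all rules"
            session.delete ( handle ); // "Delete after fire"

            // the output property, mapped to the wire record field "result"
            System.out.println ( "Score: " + type.get ( fact, "scorecard_calculatedScore" ) );
        } finally {
            session.dispose ();
        }
    }
}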

Finally, wire it all up: the timer feeds the DroolsProcess component, which in turn feeds the logger.

Results

Looking at the Kura log file /var/log/kura.log should show you something like this (where result comes from the PMML model):

2018-03-15 16:07:12,165 [DefaultQuartzScheduler_Worker-6] INFO  d.d.k.a.d.c.w.DroolsProcess - Result - type: class java.lang.Double, value: 24.0
2018-03-15 16:07:12,165 [DefaultQuartzScheduler_Worker-6] INFO  o.e.k.i.w.l.Logger - Received WireEnvelope from de.dentrassi.kura.addons.drools.component.wires.DroolsProcess-1521129031885-7
2018-03-15 16:07:12,165 [DefaultQuartzScheduler_Worker-6] INFO  o.e.k.i.w.l.Logger - Record List content: 
2018-03-15 16:07:12,165 [DefaultQuartzScheduler_Worker-6] INFO  o.e.k.i.w.l.Logger -   Record content: 
2018-03-15 16:07:12,165 [DefaultQuartzScheduler_Worker-6] INFO  o.e.k.i.w.l.Logger -     result : 24.0
2018-03-15 16:07:12,165 [DefaultQuartzScheduler_Worker-6] INFO  o.e.k.i.w.l.Logger - 

Conclusion

Of course the example is rather trivial, and the actual model and the way we wired it up are just for demonstration. But you will bring your own model, based on your machine learning solution, and have your own data to wire up. So go ahead and explore.

Dropping Apache File Install into Eclipse Kura

Sometimes the simple things may be the most valuable. Testing with Eclipse Kura™ on a Raspberry Pi (or any other Eclipse Kura device) may be a bit tricky. Of course you can use the Eclipse UI in combination with mToolkit. But if you want to edit, compile and deploy from a local desktop machine to a Kura device, then you need to click through the Web UI to upload your application, and for this to work you also need to assemble a DP (deployment package) first.

But what if you could simply drop an OSGi bundle into a directory and let it get picked up by Kura automatically? Thanks to Apache File Install, there already is such a solution. File Install scans a folder and loads every OSGi bundle located in it. If a bundle is started and it gets overwritten in the file system, then File Install will reload and restart the bundle.

So deploying and re-deploying to a Kura device is as easy as copying a file to your target with SCP (or the remote copy tool of your choice).
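
For example (host name and paths are made up, the target directory is the one File Install scans, as described below):

$ scp target/my.bundle-1.0.0.jar pi@my-kura-device:/opt/eclipse/kura/load/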

And installing Apache File Install into Eclipse Kura now just got a bit simpler.

Using Maven Central

Simply navigate to the Packages section of the Eclipse Kura Web UI. Press the “Install” button, choose “URL” and enter the following URL:

https://repo1.maven.org/maven2/de/dentrassi/kura/addons/de.dentrassi.kura.addons.utils.fileinstall/0.1.0/de.dentrassi.kura.addons.utils.fileinstall-0.1.0.dp

After confirming using the Submit button, it will take a moment and then File Install will be installed into your Kura installation. Sometimes it takes a bit longer than Kura expects and you need to reload the Web UI (Ctrl-R) until Kura has performed the installation.

Using the Eclipse Marketplace

Currently the Eclipse Marketplace is focused on hosting plugins for the Eclipse IDE, but this should change rather soon. Still, it is already possible to drag and drop Apache File Install into Eclipse Kura using the following install button:

Drag to your running Eclipse workspace to install Apache File Install for Eclipse Kura

Dragging this button into the Kura Web UI will bring up a confirmation dialog asking whether you want to install the addon. After confirming, it will go and fetch the DP and install Apache File Install into the running Kura instance.

Deploying bundles

Now it is time to deploy some bundles. By default the directory where Apache File Install looks for bundles is /opt/eclipse/kura/load. At first this directory will not exist, so it has to be created. Next we simply fetch an example bundle using wget:

$ cd /opt/eclipse/kura
$ mkdir load
$ cd load
$ wget "https://dentrassi.de/download/kura/org.eclipse.kura.example.camel.publisher-1.0.0.jar"

This is an example publisher bundle from the upcoming Kura 2.1.0 release. So if you are still using Kura 2.0, then you can either try a different OSGi bundle, or maybe give Kura 2.1 a try 😉

Due to a regression in Kura (eclipse/kura#123) there is currently no way to manually start OSGi bundles. So you need to stop Kura and start the local command console (/opt/eclipse/kura/bin/start_kura.sh). On the OSGi shell you can then:

osgi> ss example
"Framework is launched."


id	State       Bundle
116	RESOLVED    org.eclipse.kura.example.camel.publisher_1.0.0
osgi> start 116
osgi> ss example
"Framework is launched."


id	State       Bundle
116	ACTIVE      org.eclipse.kura.example.camel.publisher_1.0.0
osgi>

Now once this initial activation has been performed, File Install and OSGi will keep the bundle active. You can re-deploy this OSGi bundle by simply copying a new version over the old one. File Install will detect the change and refresh the bundle.

There is more…

Apache File Install can also update OSGi configurations and it can be configured using a set of system properties. For the full set of options check out the Apache File Install documentation.
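
For example, the scanned directory and the poll interval can be changed with system properties like these (a sketch; the property names are from the File Install documentation, the values are just examples, and how the properties reach the JVM depends on your launcher setup):

felix.fileinstall.dir=/opt/eclipse/kura/load
felix.fileinstall.poll=5000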

Camel, Kura and OSGi, struggling with ‘sun.misc.Unsafe’

So here comes a puzzle for you … You have Apache Camel (2.17), which internally uses com.googlecode.concurrentlinkedhashmap, which in turn uses sun.misc.Unsafe. Now you can argue a lot about whether this is necessary or not. It just is that way. So starting up Apache Camel in an OSGi container which does strict processing of classes will run into a “java.lang.NoClassDefFoundError” due to “sun/misc/Unsafe”.

The cause

The cause is rather simple. Apache Camel makes use of sun.misc, and so it should declare that in the OSGi manifest. OSGi R6 (and versions before that as well) defines in section 3.9.4 of the core specification that java.* is forwarded to the parent class loader, but the rest is not. So sun.misc will not be delegated to the parent class loader (which ultimately is the JVM) by default.

Solutions

As always, there are a few. There may be more possible than I describe here, but I don’t want to list any which require changing Apache Camel itself.

Fragments

Two Fragments
OSGi fragments are a way to enhance an already existing OSGi bundle; they kind of merge into the host bundle. So it is possible to create a fragment for Apache Camel which does Import-Package: sun.misc. This should quickly resolve the issue, as long as the fragment is installed into your OSGi container at the same time Apache Camel is, so that it is available at the time Apache Camel is started. The host bundle has to be org.apache.camel.camel-core, since this is the bundle requiring sun.misc.Unsafe.
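
A minimal manifest for such a fragment could look like this (the symbolic name is made up, and the camel-core version is left unconstrained):

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: my.sun.misc.user
Bundle-Version: 1.0.0
Fragment-Host: org.apache.camel.camel-core
Import-Package: sun.misc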

Of course this brings up the next issue: there is nobody who exports sun.misc. But again, there is a way to fix this.

The actual provider of sun.misc is the JVM. However, the JVM does not know about OSGi. But the OSGi container itself, the framework, can act as a proxy. If the framework bundle (aka bundle zero) exported sun.misc, it would be able to actually resolve the class using the JVM boot class path. The solution therefore is another fragment, which performs an Export-Package: sun.misc. That will bring both bundles, with their fragments, together, correctly wiring up sun.misc.

But as we have seen before, a fragment requires a “host bundle”, and the framework bundle’s name would be different when e.g. using Apache Felix instead of Eclipse Equinox.

Again, there is a solution. The system bundle is also known as system.bundle. So the fragment can specify system.bundle, with the attribute extension:=framework, as its bundle host:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: my.sun.misc.provider
Bundle-Version: 1.0.0
Fragment-Host: system.bundle; extension:=framework
Export-Package: sun.misc

Of course you can also export other JVM internal packages this way.

There are only two things to keep in mind. First of all, and this is true for all other solutions as well: if the JVM does not provide sun.misc, then this won’t work, since the class simply cannot be found. Second, and this is specific to this solution: if you start “camel-core” before those two fragments are installed, then you need to “refresh” the Apache Camel core bundle in order for the OSGi framework to re-wire your imports/exports.

There are also some pre-made extension bundles for this setup. Just search Maven Central.

Equinox and Felix

Some setups of Felix and Equinox do provide an “out of the box” workaround. Equinox, for example, automatically forwards all failed class lookups to the boot class loader, as a last resort, in case the framework is started using the org.eclipse.equinox.launcher_*.jar instead of the org.eclipse.osgi_*.jar launcher.

Bootclasspath delegation for Equinox

Eclipse Equinox also allows you to set a few system properties in order to fall back to the boot class path and delegate the lookup of “sun.misc” to the JVM:

Also see: https://wiki.eclipse.org/Equinox_Boot_Delegation

osgi.compatibility.bootdelegation=true
This will fall back to the boot class loader, just like when using the “org.eclipse.equinox.launcher” launcher.

Bootclasspath delegation for all

The OSGi core specification also allows configuring direct delegation of lookups to the boot class loader (section 3.9.3 of the OSGi core specification):

org.osgi.framework.bootdelegation=sun.misc.*
This will forward requests for “sun.misc.*” directly to the boot class loader.
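
For example, when launching the framework yourself, the property can simply be passed on the command line (a sketch; the jar name follows the Equinox naming from above, and for Equinox the property line can alternatively go into the config.ini of your installation):

java -Dorg.osgi.framework.bootdelegation=sun.misc.* -jar org.eclipse.osgi_*.jar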

Conclusion

Now people may complain “oh, how complicated this OSGi thingy is”. Well, “sun.misc.Unsafe” was never intended to be used outside the JVM. Java 9 will correct this with its module system. OSGi can already do that, but it also provides ways to work around the problem.

If you prefer to use system properties, a different launcher or the “two fragment” approach, that is up to you and your situation. For me the problem simply was to make it happen without changing either Apache Camel or the launcher configuration of Eclipse Kura. So I went with the “two fragments” approach.

Thanks

I am just writing this down in order to help others, and I got help from others to solve this myself. So thanks to the people who posted about this “on the net”; it has been a long time since I stumbled over your posts while googling for a solution. Sorry, I forgot where I initially found it.

Also thanks to Neil Bartlett for pointing out the OSGi conform solution with “org.osgi.framework.bootdelegation”.

Maven Tycho/JGit based build timestamps and the “target” directory

Now when you build OSGi bundles using Maven Tycho, you have probably run into the issue of creating a meaningful version qualifier (remember, an OSGi version always is major.minor.micro.qualifier, so no dash and definitely no -SNAPSHOT).

There are a few approaches, ranging from fully manual assignment of the build qualifier over simple timestamps to timestamps based on the last Git change.

The background

The latter one is described in the “Reproducible Version Qualifiers” wiki page of Tycho as a recipe to create the same qualifier from the same source code revision.

Actually the idea is pretty simple: instead of the current timestamp, the last relevant change in the git repository for the directory of the bundle is located and used to generate the timestamp based qualifier.

As a side note: Personally I came to the conclusion that this sounds great in the beginning, but turns out to be troublesome later. First of all, the build qualifier plugin conflicts with the source ref plugin, which generates a different manifest. Both plugins may find different “last” commits, and therefore different MANIFEST.MF files get generated. So two builds produce two bundles with the same qualifier but (due to the MANIFEST.MF) different content and two different checksums, which causes issues later on and has to be cleaned up by some baseline repository matching. In addition, you simply cannot guarantee that two different builds come to the same result: too many components (actually Maven and the local host) are outside of the source code repository and still influence the output of the build. But this post is about the JGit based timestamps 😉

A simple configuration using the Git based approach looks like this in the parent pom file:


<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-packaging-plugin</artifactId>
  <version>${tycho.version}</version>
  <dependencies>
    <dependency>
      <groupId>org.eclipse.tycho.extras</groupId>
      <artifactId>tycho-buildtimestamp-jgit</artifactId>
      <version>${tycho-extras.version}</version>
    </dependency>
  </dependencies>
  <configuration>
    <timestampProvider>jgit</timestampProvider>
    <jgit.ignore>
      pom.xml
    </jgit.ignore>
  </configuration>
</plugin>

As you can see, there is a configuration property jgit.ignore which allows excluding a set of files from the search for the last relevant commit. So git commits which only change ignored files are also ignored in the search for the last modification timestamp. Since the pom.xml will probably just get changed to point to a different parent POM, this seems like a good idea.

The problem

Now what happens when there are uncommitted changes in the working tree? Then it would not be possible for the build to determine the last relevant commit, since the change is not committed! Maven Tycho does provide a way to handle this (aka “dirty working tree behaviour”) and will allow you to ignore such changes, which might not be a good idea after all. The default behavior is to simply raise an error and fail the build.
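
If you do want to relax this behaviour, it can be tuned in the same plugin configuration. As far as I recall, the property is called jgit.dirtyWorkingTree and accepts error (the default), warning and ignore:

<configuration>
  <timestampProvider>jgit</timestampProvider>
  <!-- only warn about uncommitted changes instead of failing the build -->
  <jgit.dirtyWorkingTree>warning</jgit.dirtyWorkingTree>
</configuration>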

For me it became a real annoyance when it complained about the “target” directory itself. The truth is, this output directory should be added to the “.gitignore” file anyway, which would then also be respected by the git based build timestamp provider. But then again it should not fail the build just because of that.

Solution

But the solution to that was rather trivial. The jgit.ignore property follows the git ignore syntax and also allows specifying directories:


<jgit.ignore>
  pom.xml
  target/
</jgit.ignore>

There are two things to keep in mind: each entry goes on a new line, and the root of the evaluation seems not to be the root of the project, so using “/target/” (as opposed to “target/”) does not work.

IAdapterFactory with generics

Now I have been working with the Eclipse platform for quite a while. If you do so too, you might already have run into the “adaptable” mechanism the Eclipse platform provides (article on EclipseZone).

The basics

The basic idea is to “cast” one object into the class of another, allowing you to step into the process and maybe return a new object instance if casting is not possible, thus adapting to the requested interface. This is nothing new, but comes in handy every now and then, especially since the Eclipse platform provides an “external” adapter mechanism to control this adaption process. Simply assume you have a class “MyModelDocument”, which is used throughout your Eclipse application. Now somebody selects a UI element, backed by an instance of your class, and you want the Eclipse UI to show the properties of your instance in the Eclipse properties view. This is done through an instance of IPropertySource. At first this would mean your class has to implement IPropertySource, and you would have to do this for every other aspect you want to add to your model. In addition to implementing the interface, you would also aggregate a lot of dependencies in the bundle of your model.

But there is a better way, thanks to the adapter framework. First of all, your class “MyModelDocument” can use the adapter framework and simply provide an adapter class, which has to implement IPropertySource but is backed by the original instance of your “MyModelDocument” class. Second, you can create a new bundle/plugin which contributes to the extension point named “org.eclipse.core.runtime.adapters” and implement a class based on IAdapterFactory.
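
The registration in the plugin.xml could look like this (the package and class names are of course just placeholders for your own):

<extension point="org.eclipse.core.runtime.adapters">
   <factory
         adaptableType="com.example.model.MyModelDocument"
         class="com.example.adapters.MyAdapterFactory">
      <adapter type="org.eclipse.ui.views.properties.IPropertySource"/>
   </factory>
</extension>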

Generics

Now a typical implementation of this class in Java 5+ looked like this:


public class MyAdapterFactory implements IAdapterFactory {

    @SuppressWarnings ( "unchecked" )
    @Override
    public Object getAdapter (
            final Object adaptableObject,
            final Class adapterType ) {

        if ( !(adaptableObject instanceof MyModelDocument) ) {
            return null;
        }

        if ( IPropertySource.class.equals ( adapterType ) ) {
            return new MyModelDocumentPropertySourceAdapter ( adaptableObject );
        }

        return null;
    }

    @SuppressWarnings ( "unchecked" )
    @Override
    public Class[] getAdapterList () {
        return new Class[] { IPropertySource.class };
    }
}

Of course the @SuppressWarnings for “unchecked” could be left out, but then the compiler would emit a bunch of warnings. The cause simply was that IAdapterFactory did not make use of Java 5 generics.

In a recent update of the Eclipse platform this interface has been extended to allow the use of generics: the method Object getAdapter(…) is now <T> T getAdapter(…). While this does not really benefit implementations of the class itself (IMHO), it cleans up the warnings if you do it right 😉

Keep in mind that the type parameter <T> is completely variable for the factory itself, since the factory allows adapting to any kind of type some other class requests. So you actually will never be able to make a specific substitution for <T>. The return type of getAdapter() changes to T, which requires you to actually cast to T. This can be done in two ways. Either by casting using:


return (T)new MyModelDocumentPropertySourceAdapter ( adaptableObject );

This will trigger the next warning right away, since there is no way to actually perform that cast: type erasure kills the type information at runtime! The way to work around this in Java has always been to explicitly pass the type in such situations. Like a TypedQuery in JPA, the IAdapterFactory already has the type information as a parameter, so you can use a programmatic cast instead:


return adapterType.cast ( new MyModelDocumentPropertySourceAdapter ( adaptableObject ) );

So the full code would look like:


public class MyAdapterFactory implements IAdapterFactory {

    @Override
    public <T> T getAdapter (
            final Object adaptableObject,
            final Class<T> adapterType ) {

        if ( !(adaptableObject instanceof MyModelDocument) ) {
            return null;
        }

        if ( IPropertySource.class.equals ( adapterType ) ) {
            return adapterType.cast (
                    new MyModelDocumentPropertySourceAdapter ( adaptableObject )
            );
        }

        return null;
    }

    @Override
    public Class<?>[] getAdapterList () {
        return new Class<?>[] { IPropertySource.class };
    }
}
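
For completeness, this is roughly how a consumer triggers the adaption, with the adapter manager routing the request to the factory registered above (a sketch; “document” stands for some instance of MyModelDocument):

// with the generified platform API, no manual cast is needed
final IPropertySource source = Platform
        .getAdapterManager ()
        .getAdapter ( document, IPropertySource.class );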

OSGi + JSP + JSTL

What is so easy with a standard JEE setup becomes quite painful using OSGi. Although there are very interesting projects and approaches like OSGi enRoute, Pax Web or Equinox JSP (and probably a few more), taking a step beyond “Hello World” quickly becomes frustrating.

OSGi has had support for registering servlets for quite a while, and it becomes even smoother using the HTTP whiteboard approach. But writing a servlet is, in most cases, not what you actually want. It is more about wiring method calls, service method calls, to URLs, and finally rendering the result to HTML. Looking at the Spring WebMVC framework, this can be as easy as annotating a class with some @Controller annotation and returning a redirect to a JSP page.
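
With the HTTP whiteboard, registering a servlet really is just publishing it as an OSGi service with the right property. A minimal Declarative Services sketch (the class name and URL pattern are made up):

import java.io.IOException;

import javax.servlet.Servlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.osgi.service.component.annotations.Component;

// the whiteboard implementation picks up this service and
// registers it under the given URL pattern
@Component (
        service = Servlet.class,
        property = "osgi.http.whiteboard.servlet.pattern=/hello" )
public class HelloServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    @Override
    protected void doGet ( final HttpServletRequest req, final HttpServletResponse resp ) throws IOException {
        resp.setContentType ( "text/plain" );
        resp.getWriter ().println ( "Hello OSGi whiteboard" );
    }
}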

Living in OSGi land, this sounds even better. Dynamically registering and referencing controllers and services. Configuring the application on the fly, during runtime, a dream come true.

Pretty soon it gets quite frustrating from there on. Equinox JSP is not too bad, but suffers from the Equinox HTTP service implementation, which has a few bugs and drawbacks. Pax Web is fine, but its whiteboard pattern, although it has the same name, has nothing to do with the OSGi HTTP whiteboard. Most other tutorials around OSGi and HTTP focus on registering a servlet, since this is pretty much the standard specification right now. Everything around JSP is self-made for each framework and mostly works around issues in Apache Jasper. Jasper seems to be the only JSP implementation around, but it is so deeply tied to JEE that it is really hard to use in a different environment. So most tools simply wrap class loaders and tweak “getResource” methods in order to let Jasper think it is in a JEE environment.

Looking at what other JEE applications do, it really seems that everybody uses Jasper, in differently patched versions: Tomcat of course, JBoss (aka WildFly), Glassfish and Geronimo. Equinox JSP and Pax Web also have their own wrapped and patched Jasper versions.

Now when it comes to JSTL: sure, you want to have all the fuss when you develop JEE-ish applications. Pax Web really does consider looking up dependent bundles for tag libraries, whereas Equinox JSP only scans the “Bundle-ClassPath” jars. Apache Jasper however simply ignores the “core” JSTL tag library, although it might get detected on the class path.

Now the good point is, it’s OSGi, and with a little bit of effort you can throw different frameworks together into one pot. Taking Equinox as the OSGi framework, Pax Web for providing the HttpService, Equinox JSP for a non-intrusive JSP servlet and a little bit of custom code for the Spring MVC like controller framework, Package Drone got a nice little web framework. The JSTL tags are provided by JBoss JSTL, which features an OSGi version of the tags.

While the simple servlets are plain Pax Web registrations, including the Equinox JSP servlet, the Spring MVC like setup is a custom part of Package Drone, built with some reusability in mind. A main dispatcher servlet picks up all services which are registered with a @Controller annotation. Calls are simply routed to service methods. The result is a reference to a JSP page, which now actually is part of the controller bundle and not the servlet. The dispatcher mechanism takes care of this: on the one side it alters the redirection to the JSP so that the bundle is part of the redirect path, and on the other side it registers all relevant JSP resources of a bundle with the JSP servlet of Equinox JSP.
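
To illustrate the controller style (a purely hypothetical sketch of the idea, not the actual Package Drone API):

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// hypothetical sketch: @Controller, @RequestMapping and ModelAndView
// stand in for whatever the dispatcher bundle actually provides
@Controller
public class ChannelController {

    @RequestMapping ( "/channels" )
    public ModelAndView list () {
        final Map<String, Object> model = new HashMap<> ();
        model.put ( "channels", Collections.emptyList () ); // placeholder data
        // the returned JSP reference is resolved relative to this controller's bundle
        return new ModelAndView ( "channels/list.jsp", model );
    }
}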

It took quite a while and cost some nerves … but it seems that the next version of Package Drone will have a web framework which is based on the OSGi HttpService, supports controller style services and still feels a bit like JEE 😉

Package Drone – what’s next?!

Every now and then there is some time for Package Drone. So let’s peek ahead at what will happen in the next few weeks.

First of all, there is the Eclipse DemoCamp in Munich, at which Package Drone will be presented. So if you want to talk in person, come over and pay us a visit.

Also, I have been working on version 0.8.0. The more you think about it, the more ideas you get of what could be improved. If only I had the time. But finally it is time for validation! Channels and artifacts can be validated and the outcome will be presented in red and yellow, and in a lot more detail ;-). This is a first step towards more things we hope to achieve with validation, like rejecting content and providing resolution mechanisms. Quick fix your artifacts 😉

Also there are a few enhancements to make it easier for new users to start with Package Drone. “Channel recipes”, for example, set up and configure a channel for a specific purpose, just to name one.

Of course this is important since, with a little bit of luck, there will be an article in the upcoming German “Eclipse Magazin”, which might bring some new users. Helping them to have an easy start is always a good idea 😉

The next version also brings a new way to upload artifacts. A plain simple HTTP request will do to upload a new artifact. While I would not call it an “API”, it definitely is the starting point of exactly that. A command line client is planned, and the Jenkins plugin for Package Drone is already available. It allows archiving artifacts directly to a Package Drone channel, including adding some metadata of the build.

So, if you have more ideas, please raise an issue in the GitHub issue tracker.

Meanwhile @ Package Drone

Since Package Drone has its own home now, I would simply like to sum up here what progress Package Drone has made in the last few weeks.

First of all, the most recent release, as of now, is 0.4.0. The last two releases were mostly focused on the processing of zipped P2 repositories and what comes with that. These can now be processed in two different ways. Either using the Unzip adapter, which is more a way of deep linking, but still allows one to access a P2 repository inside that ZIP artifact. The second way is the P2 repository unzipper aspect, which unzips bundles and features and creates virtual child artifacts. The second approach makes these artifacts available to all other Package Drone functionality, but also modifies the original content by unzipping and creating new metadata. However, both variants can be used at the same time!

There is also a setup for OpenShift, and a quickstart at the OpenShift Hub. So if you want to try out Package Drone, the simplest way is to create a free account at OpenShift and deploy a new Package Drone setup with a few clicks, including the database setup.

Of course lots of things have been cleaned up and improved in the UI and the backend system, but this is more a topic for the actual release notes at GitHub.

So the question is what the future will bring. One thing I would like to see is PostgreSQL again as a database. With the most recent Postgres JDBC driver and some help from my colleague, this might be a feature appearing in one of the next versions. MySQL works fine, but it has a very bad behavior when it comes to BLOB support. And since all artifacts are stored in the database, this can cause some huge memory requirements. Hopefully Postgres does a better job here.

Of course there is also the idea of storing the artifacts separately in the file system. While this requires a little bit of extra processing when it comes to backing up your system, it might be the right time to add a full backup and restore process to Package Drone. This would also solve the problem of how to switch between storage backends.

And of course, if you would like to help out, please report bugs and become a contributor 😉