Kapua


Kapua micro client SDK, running on a microcontroller

A few weeks back, while at EclipseCon France, I stumbled upon a nice little gadget. There was a talk from MicroEJ about Java on microcontrollers, and they were showing an IoT related demo based on their development environment. It seemed they had Eclipse Paho (including TLS) and Google Protobuf running on their JVM without too much trouble.

ST Board with Thermocloud

My first idea was to simply drop the Kapua Gateway Client SDK on top of it, implementing the cloud facing API of MicroEJ and let their IoT demo publish data towards Kapua.
After a few days I was able to order such an STM32F746G-DISCO board myself and play around with it a little. It quickly turned out that, while it was pretty easy to drop some Java code on the device, using the gateway client SDK was not an option. The MicroEJ JVM is based on Java CLDC 8. Sounds like Java 8, right? Well, it is more like Java 7. Aside from a few classes which are missing, the core features missing were Java 8’s lambdas and enhancements to interfaces (like static methods and default methods).
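To illustrate the gap: where the Java 8 based gateway client SDK can simply accept a lambda, CLDC 8 code has to fall back to an anonymous inner class. The callback interface and client in this sketch are made up, just to show the difference:

// hypothetical callback interface, for illustration only
interface MessageListener {
  void messageReceived(String message);
}

// Java 8: pass a lambda
client.subscribe(message -> System.out.println("Received: " + message));

// Java 7 / CLDC 8: an anonymous inner class instead
client.subscribe(new MessageListener() {
  @Override
  public void messageReceived(String message) {
    System.out.println("Received: " + message);
  }
});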

Rewriting the gateway client SDK in Java 7, dropping the shiny API which we currently have, didn’t sound very appealing. But then again, implementing the Kapua communication stack actually isn’t that complicated, and such an embedded device wouldn’t really need the extensibility and modularity of the Java 8 based gateway client SDK. So after a few hours there was the Kapua micro client SDK, which doesn’t consume any dependencies other than Paho and Protobuf and also only uses a minimal set of Java 7 functionality.

The second step was to implement the MicroEJ specific APIs and map the calls to the Kapua micro SDK, which wasn’t too difficult either. So now it is possible to simply install the “Kapua Data Channel Provider” from the MicroEJ Community Store. Alternatively you can compile the sources yourself as the code for this adapter is also on GitHub. Once the data channel provider is installed you can fire up any application consuming the DataChannel API, like the “Thermocloud” application, and publish data to Kapua. Please be sure to follow the installation instructions on the Kapua data channel provider for configuring the connection to your Kapua instance.

Kapua Data Channel Provider
Data from Thermocloud in Kapua

As the micro client is capable of running on Java 7, it might also be a choice for people wanting to connect from Android to Kapua: since Java 8 on Android still seems to be rather painful, avoiding it altogether could be an option.


I would like to thank Laurent and Frédéric from MicroEJ, who helped me fix all the noob issues I had.

Talking to the cloud

While working on Eclipse Kapua, I wanted to run different tests, pushing telemetry data into the system. So I started to work on the Kura simulator, which can be used to simulate an Eclipse Kura IoT gateway in a plain Java project, no special setup required. That helped a lot for unit testing and scale testing. Even generating a few simple telemetry data streams for simulating data works out of the box.

But then again I wanted to have something more lightweight and controllable. With the simulator you derive from a simple class and are fully controlled by the simulator framework. That may work well in some cases, but in others you may want to hand control over to the actual application. Assume you already have a component which is “in charge” of your data, and now you want to push this into the cloud. Of course you can work around the framework somehow. But creating a nice API for this, one which is simple and easy to understand, is way more fun 😉

So here is my take on a Gateway Client API, sending IoT data to the cloud, consuming command & control from it.

Intentions

I wanted to have a simple API: easy to understand, readable, preventing you from making mistakes in the first place. And if something goes wrong, it should go wrong right away. Currently we go with MQTT, but there would be an option to go with HTTP as well, or AMQP in the future. And even for MQTT alone we have Eclipse Paho and FUSE MQTT. Both should be available, both may have special properties, but they share some common ground. So implementing new providers should be possible, while sharing code should be easy as well.

Example

Now here is what I came up with:

try (Client client = KuraMqttProfile.newProfile(FuseClient.Builder::new)
  .accountName("kapua-sys")
  .clientId("foo-bar-1")
  .brokerUrl("tcp://localhost:1883")
  .credentials(userAndPassword("kapua-broker", "kapua-password"))
  .build()) {

  try (Application application = client.buildApplication("app1").build()) {

    // subscribe to a topic

    application.data(Topic.of("my", "receiver")).subscribe(message -> {
      System.out.format("Received: %s%n", message);
    });

    // cache sender instance

    Sender sender = application
      .data(Topic.of("my", "sender"))
      .errors(ignore());

    int i = 0;
    while (true) {
      // send
      sender.send(Payload.of("counter", i++));
      Thread.sleep(1000);
    }
  }
}

Looks pretty simple, right? In the background the MQTT connection is managed, payload gets encoded, birth certificates get exchanged and subscriptions get managed. But still the main application stays in control of the data flow.

How to do this at home

If you want to have a look at the code, it is available on GitHub (ctron/kapua-gateway-client) and ready to consume from Maven Central (de.dentrassi.kapua). But please be aware that this is a proof of concept, and may never become more than that.

Simply adding the following dependency to your project should be enough:

<dependency>
  <groupId>de.dentrassi.kapua</groupId>
  <artifactId>kapua-gateway-client-provider-mqtt-fuse</artifactId>
  <version>0.2.0</version> <!-- check for a more recent version -->
</dependency>

With this dependency you can use the example above. If you want to go for Paho instead of FUSE, use kapua-gateway-client-provider-mqtt-paho instead.
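In that case the dependency would look like this (same groupId and version as above); presumably you would also pass the Paho provider’s builder to the profile instead of FuseClient.Builder::new:

<dependency>
  <groupId>de.dentrassi.kapua</groupId>
  <artifactId>kapua-gateway-client-provider-mqtt-paho</artifactId>
  <version>0.2.0</version> <!-- check for a more recent version -->
</dependency>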

Taking it for a test drive

Now taking this for a test drive is even more fun. Eclipse SmartHome has the concept of a persistence system, where telemetry data gets stored in a time-series-like database. There is a default implementation based on rrd4j. So re-implementing this interface for Kapua was quite easy and resulted in an example module which can be installed into the Karaf based openHAB 2 distribution with just a few commands:

openhab> repo-add mvn:de.dentrassi.kapua/karaf/0.2.0/xml/features
openhab> feature:install eclipse-smarthome-kapua-persistence

Then you need to re-configure the component over the “Paper UI” and point it towards your Kapua setup. Maybe you will need to tweak the “kapua.persist” file in order to define what gets persisted and when. And if everything goes well, your temperature readings will get pushed from SmartHome to Kapua.
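For reference, such a “kapua.persist” file could look roughly like this, assuming it follows the usual SmartHome persistence syntax (the strategy and item names here are made up):

// persistence strategies: a cron-like schedule plus a default
Strategies {
  everyMinute : "0 * * * * ?"
  default = everyChange
}

// which items get persisted, and when
Items {
  Temperature* : strategy = everyChange, everyMinute
}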


Google Summer of Code 2017 with Eclipse Kapua

I am happy to announce that Eclipse Kapua got two slots in this year’s Google Summer of Code. Yes, two projects got accepted, and both are for the Eclipse Kapua project.

Anastasiya Lazarenko will provide a simulation of a fish tank and Arthur Deschamps will go for a supply chain simulation. Both simulations are planned to feed their data into Eclipse Kapua using the Kura simulator framework. Although both projects seem to be quite similar from a high level perspective, I think they are quite different when it comes to the details.

The basic idea is not to provide a statistically or physically accurate simulation, but something to play around and interact with: spinning up a few virtual instances of both models, hooking them up to our cloud based IoT solution, interacting a bit with them and getting some reasonable feedback values.

For Kapua this will definitely mean evolving the simulator framework based on the feedback from both students, making it (hopefully) easier to use for other tasks. And maybe, just maybe, we can also go the extra mile and make the same simulations available for Eclipse Hono.

If you want to read more about Anastasiya and Arthur just read through their introductions on kapua-dev@eclipse.org and give them a warm welcome:

read Anastasiya’s introduction
read Arthur’s introduction

Best of luck to you!

Simulating telemetry streams with Kapua and OpenShift

Sometimes it is necessary to have some simulated data instead of fancy sensors attached to your IoT setup. As Eclipse Kapua starts to adopt Elasticsearch, it seemed necessary to actually unit test the inbound telemetry stream of Kapua: data coming from the gateway, being processed by Kapua, stored into Elasticsearch, and then retrieved back from Elasticsearch over the Kapua REST API. A lot can go wrong here 😉

The Kura simulator, which is now hosted in the Kapua repository, seemed to be the right place to do this. That way we can not only test this inside Kapua, but also enable different use cases for simulating data streams outside of unit tests, and we can leverage the existing OpenShift integration of the Kura simulator.

The Kura simulator now also has the ability to send telemetry data. In addition, there is a rather simple simulation model which can use existing value generators and map those to a more complex metric setup.

From a programmatic perspective, creating a simple telemetry stream would look like this:

GatewayConfiguration configuration = new GatewayConfiguration("tcp://kapua-broker:kapua-password@localhost:1883", "kapua-sys", "sim-1");
try (GeneratorScheduler scheduler = new GeneratorScheduler(Duration.ofSeconds(1))) {
  Set<Application> apps = new HashSet<>();
  apps.add(simpleDataApplication("data-1", scheduler, "sine", sine(ofSeconds(120), 100, 0, null)));
  try (MqttAsyncTransport transport = new MqttAsyncTransport(configuration);
       Simulator simulator = new Simulator(configuration, transport, apps)) {
    // keep the simulation running until the process gets interrupted
    Thread.sleep(Long.MAX_VALUE);
  }
}

The Generators.simpleDataApplication creates a new Application from the provided map of functions (Map<String,Function<Instant,?>>). This is a very simple application, which reports a single metric on a single topic. The Generators.sine function returns a function which creates a sine curve using the provided parameters.

Now one might ask: why is this a Function<Instant,?>, wouldn’t a simple Supplier be enough? There is a good reason for that. The expectation of the data simulator is that the telemetry data is derived from the provided timestamp. This is done in order to generate predictable timestamps and values along the communication path. In this example we only have a single metric in a single instance. But it is possible to scale up the simulation to run 100 instances on 100 pods in OpenShift. In that case each simulation step in one JVM would receive the same timestamp, and thus each of those 100 instances would generate the same values, sending the same timestamps upwards to Kapua. Validating this information later on becomes quite easy, as you can not only measure the time delay of the transmission, but also check whether there are inconsistencies in the data, gaps or other issues.
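To make this contract concrete, here is a minimal sketch of a custom generator function (the sawtooth shape and its parameters are made up for illustration). Since the value is derived purely from the timestamp, every instance fed the same Instant reports the same value:

import java.time.Instant;
import java.util.function.Function;

// a sawtooth ramping from 0 to 100 over a 60 second period;
// same Instant in, same value out - in every JVM and every pod
Function<Instant, Double> sawtooth = timestamp -> {
  long millisIntoPeriod = timestamp.toEpochMilli() % 60_000L;
  return 100.0 * millisIntoPeriod / 60_000.0;
};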

When using the SimulationRunner, it is possible to configure data generators instead of coding:

{
 "applications": {
  "example1": {
   "scheduler": { "period": 1000 },
   "topics": {
    "t1/data": {
     "positionGenerator": "spos",
     "metrics": {
      "temp1": { "generator": "sine1", "name": "value" },
      "temp2": { "generator": "sine2", "name": "value" }
     }
    },
    "t2/data": {
     "metrics": {
      "temp1": { "generator": "sine1", "name": "value" },
      "temp2": { "generator": "sine2", "name": "value" }
     }
    }
   },
   "generators": {
    "sine1": {
     "type": "sine", "period": 60000, "offset": 50, "amplitude": 100
    },
    "sine2": {
     "type": "sine", "period": 120000, "shift": 45.5, "offset": 30, "amplitude": 100
    },
    "spos": {
     "type": "spos"
    }
   }
  }
 }
}

For more details about this model see: Simple simulation model in the Kapua User Manual.

And of course this can also be managed with the OpenShift setup. Loading a JSON file works like this:

oc create configmap data-simulator-config --from-file=KSIM_SIMULATION_CONFIGURATION=../src/test/resources/example1.json
oc set env --from=configmap/data-simulator-config dc/simulator

Finally it is now possible to visually inspect this data with Grafana, directly accessing the Elasticsearch storage.

Testing Kapua with simulated Kura gateways

Now you’ve got your pretty new OpenShift setup of Eclipse Kapua and want to give your IoT cloud a test run?! Testing it out with 100 devices, just for fun? Or even more? But you are too lazy to flash 1000 SD cards for your Raspberry Pi cluster? Here comes the Kura simulator framework. 😉

In order to provide some automatic testing for Kapua I started working on a simulator framework which simulates Kura instances completely in Java. No backend needed, no hardware needed, and it is able to run multiple instances in a single JVM. All hosted on GitHub at ctron/kura-simulator.

Kura simulator instances in Kapua

The basic idea was to create a set of classes which can be used in automated unit tests in order to simulate a Kura gateway, but which allow for finer grained control over it for testing the good, the bad and the ugly. A real Kura instance would of course be a more realistic test partner, but then again this would have quite a few drawbacks. First of all, Kura cannot be embedded into a unit or integration test. It has far too many dependencies on directory structures, command line utilities and native libraries, and it would also require an OSGi container to be started. Second, Kura would always behave like Kura. For some tests this may be fine, but if you want to test corner cases where the gateway responds in a way which is not expected by Kapua, then this cannot be done with Kura.

So running a single Kura simulator can be as easy as:

ScheduledExecutorService downloadExecutor =
   Executors.newSingleThreadScheduledExecutor(new NameThreadFactory("DownloadSimulator"));

GatewayConfiguration configuration =
   new GatewayConfiguration("tcp://kapua-broker:kapua-password@localhost:1883", "kapua-sys", "sim-1");

Set<Application> apps = new HashSet<>();
apps.add(new SimpleCommandApplication(s -> String.format("Command '%s' not found", s)));
apps.add(AnnotatedApplication.build(new SimpleDeployApplication(downloadExecutor)));

try (MqttSimulatorTransport transport = new MqttSimulatorTransport(configuration);
     Simulator simulator = new Simulator(configuration, transport, apps)) {
    // keep the simulator alive until the process gets interrupted
    Thread.sleep(Long.MAX_VALUE);
    logger.info("Bye bye...");
} finally {
  downloadExecutor.shutdown();
}

Of course, scaling this up and running a few more instances isn’t a big deal either. Running this in a Docker container and scaling it up even more with OpenShift works fine as well. So testing any number of gateways just became a lot easier.
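Assuming the OpenShift setup from above, with the simulator running behind the deployment config dc/simulator, scaling up could be as simple as:

oc scale dc/simulator --replicas=100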

Currently the simulator can emulate the command service (V1) and most of the deploy service (V2). The configuration service is still missing, but should get implemented in the next few days. Of course it is also possible to register a custom application and provide some metrics yourself.
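As a starting point, the SimpleCommandApplication from the snippet above already accepts a custom handler function, so wiring in your own command could look like this (the “uptime” command is a made-up example):

Set<Application> apps = new HashSet<>();
apps.add(new SimpleCommandApplication(command -> {
  // "uptime" is a made-up example command
  if ("uptime".equals(command)) {
    return "up and running";
  }
  return String.format("Command '%s' not found", command);
}));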