Technical Stuff

77 posts

Assorted technical stuff of all kinds.

Manually reclaiming a persistent volume in OpenShift

When you have a persistent volume in OpenShift configured with “Retain”, the volume will switch to “Released” after the claim has been deleted. But what now? How do you manually recycle it? This post gives a brief overview of how to manually reclaim such a volume.

Deleting the claim

Deleting the persistent volume claim in OpenShift is simple, either using the Web UI or by executing:

$ oc delete pvc/my-claim

If you check, you will see that the persistent volume is “Released” but not “Available”:

$ oc get pv
NAME              CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS     CLAIM                         REASON    AGE
my-pv             40Gi       RWO           Retain          Released   my-project/my-claim                     2d

What the documentation tells us

The OpenShift documentation states:

By default, persistent volumes are set to Retain. NFS volumes which are set to Recycle are scrubbed (i.e., rm -rf is run on the volume) after being released from their claim (i.e, after the user’s PersistentVolumeClaim bound to the volume is deleted). Once recycled, the NFS volume can be bound to a new claim.

At a different location it simply says:

Retained reclaim policy allows manual reclamation of the resource for those volume plug-ins that support it.

But how to actually do that? How to manually reclaim the volume?

Reclaiming the volume

First of all, ensure that the data is actually deleted. Using NFS you will need to manually delete the content of the share, e.g. with rm -Rf /exports/my-volume/*, but be sure to keep the actual export directory in place.

Now it is time to actually make the PV available again for being claimed. For this, the reference to the previous claim (spec/claimRef) has to be removed from the persistent volume. You can do this manually from the Web UI or with a short command from the shell (assuming you are using bash):

$ oc patch pv/my-pv --type json -p $'- op: remove\n  path: /spec/claimRef'
"my-pv" patched

This should return the volume into state “Available”:

$ oc get pv
NAME              CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS     CLAIM                         REASON    AGE
my-pv             40Gi       RWO           Retain          Available                                          2d

OPC UA solutions with Eclipse Milo

Eclipse IoT
This article walks you through the first steps of creating an OPC UA solution based on Eclipse Milo. OPC UA, also known as IEC 62541, is an IoT solution for connecting industrial automation systems. Eclipse Milo™ is an open-source Java based implementation.

What is OPC UA?

OPC UA is a point-to-point, client/server based communication protocol used in industrial automation scenarios. It offers APIs for telemetry data, command and control, historical data, alarming and event logs. And a bit more.

OPC UA is also the successor of OPC DA (AE, HD, …) and puts a lot more emphasis on interoperability than the older, COM/DCOM based, variants. It not only offers a platform neutral communication layer, with security built right into it, but also a rich set of interfaces for handling telemetry data, alarms and events, historical data and more. OPC clearly has an industrial background, as it comes from the area of process control, PLCs and SCADA-like systems. It is also known as IEC 62541.

Looking at OPC UA from an MQTT perspective, one might ask: why do we need OPC UA? Where MQTT offers a completely undefined topic structure and undefined data types, OPC UA provides a framework for standard and custom data types, a defined (hierarchical) namespace and a definition of request/response style communication patterns. Especially the type system, even with simple types, is a real improvement over MQTT’s BLOB approach. With MQTT you never know what is inside your message. It may be a numeric value encoded as a string, a JSON encoded object or even a picture of a cat. OPC UA, on the other hand, offers a type system which carries the type information together with the actual values.

OPC UA’s subscription model also provides a really efficient way of transmitting data in a timely manner, while only transmitting data when necessary, as defined by client and server. In combination with the binary protocol this can be a real resource saver.

The architecture of Eclipse Milo

Traditionally, OPC UA frameworks are split up into a “stack” and an “SDK”. The “stack” is the core communication protocol implementation, while the “SDK” builds on top of that, offering a simpler application development model.

Eclipse Milo Components

Eclipse Milo offers both “stack” and “SDK” for both “client” and “server”. “core” is the common code shared between client and server. This should explain the module structure of Milo when doing a search for “org.eclipse.milo” on Maven Central:

org.eclipse.milo : stack-core
org.eclipse.milo : stack-client
org.eclipse.milo : stack-server
org.eclipse.milo : sdk-core
org.eclipse.milo : sdk-client
org.eclipse.milo : sdk-server

Connecting to an existing OPC UA server only requires the “sdk-client” module, as everything else it needs comes in as a transitive dependency. Likewise, creating your own OPC UA server only requires the “sdk-server” module.

Making contact

Focusing on the most common use case of OPC, data acquisition and command & control, we will now create a simple client which will read out telemetry data from an existing OPC UA server.

The first step is to look up the “endpoint descriptors” from the remote server:

EndpointDescription[] endpoints =
  UaTcpStackClient.getEndpoints("opc.tcp://localhost:4840")
    .get();

The trailing .get() might have tipped you off that Milo makes use of Java 8 futures (CompletableFuture) and thus easily allows asynchronous programming. However, this example uses the synchronous .get() call in order to wait for the result, which makes the tutorial more readable as we look at the code step by step.
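
For completeness, the same lookup can also be handled fully asynchronously. A minimal sketch, using only the standard CompletableFuture API on the future returned by Milo:

UaTcpStackClient.getEndpoints("opc.tcp://localhost:4840")
  .thenAccept(endpoints ->
    System.out.println("Found " + endpoints.length + " endpoints"))
  .exceptionally(e -> {
    e.printStackTrace(); // handle lookup failures here
    return null;
  });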

Normally the next step would be to pick the “best” endpoint descriptor. However “best” is relative and highly depends on your security and connectivity requirements. So we simply pick the first one:

OpcUaClientConfigBuilder cfg = new OpcUaClientConfigBuilder();
cfg.setEndpoint(endpoints[0]);

Next we will create and connect the OPC client instance based on this configuration. Of course the configuration offers a lot more options. Feel free to explore them all.

OpcUaClient client = new OpcUaClient(cfg.build());
client.connect().get();

Node IDs & the namespace

OPC UA identifies its elements (objects, folders, items) using “Node IDs”. Each server has multiple namespaces and each namespace has a tree of folders, objects and items. There is a browse API which allows you to walk through this tree, but that is mostly for human interaction. If you already know the Node ID of the element you would like to access, you can simply provide it directly. Node IDs can be string encoded and might look something like ns=1;i=123. This example references a node in namespace #1, identified by the numeric ID “123”. Aside from numeric IDs, there are also string IDs (s=), UUID/GUID IDs (g=) and even a BLOB type (b=).

The following examples will assume that a node ID has already been parsed into a variable like nodeId, which can be done with Milo using the following code:

NodeId nodeIdNumeric = NodeId.parse("ns=1;i=42");
NodeId nodeIdString  = NodeId.parse("ns=1;s=foo-bar");

Of course instances of “NodeId” can also be created using the different constructors. This approach is more performant than using the parse method.

NodeId nodeIdNumeric = new NodeId(1, 42);
NodeId nodeIdString  = new NodeId(1, "foo-bar");

The main reason for using Node IDs is that they can be encoded efficiently when transmitted, especially in the OPC UA binary protocol. For example, it is possible to look up an item once by its longer, string based browse path and then only use the compact numeric Node ID for all further interaction.

Additionally there is a set of “well known” node IDs. For example, the root folder always has the node ID ns=0;i=84. So there is no need to look those up, they can be considered constants and used directly. Milo defines all well known IDs in the Identifiers class.
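
As a small illustration (a sketch only), resolving well known folders this way requires no parsing at all:

NodeId rootFolder    = Identifiers.RootFolder;    // ns=0;i=84
NodeId objectsFolder = Identifiers.ObjectsFolder; // ns=0;i=85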

Reading data

After the connection has been established, we will request a single read of a value. Normally OPC UA is used in an event driven manner, but we will start simple, with a single explicitly requested read:

DataValue value =
  client.readValue(0, TimestampsToReturn.Both, nodeId)
    .get();

The first parameter, the max age, lets the server know that we are OK with reading a value which is a bit older, which could reduce traffic to the underlying device/system. By passing zero we request a fresh update from the value source.
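
For illustration, if a slightly stale value were acceptable, a non-zero max age (in milliseconds) could be passed instead; a sketch, reusing the client and nodeId from above:

// accept a cached value of up to 500 ms age
DataValue value =
  client.readValue(500.0, TimestampsToReturn.Both, nodeId)
    .get();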

The above call is actually a simplified version of a more complex read call. In OPC UA, items have attributes and there is a “main” attribute, the value. The simplified call defaults to reading the “value” attribute, but it is possible to read all kinds of other attributes from the same item. The next snippet shows the more complex read call, which allows you to read not only different attributes of an item, but also multiple items at the same time:

ReadValueId readValueId =
  new ReadValueId(nodeId, AttributeId.Value.uid(), null, null);

client
  .read(0, TimestampsToReturn.Both, Arrays.asList(readValueId))
    .get();

For this read call we also request both the server and the source timestamp. OPC UA timestamps values, so you know when a value changed to the reported value. It is also possible that the device itself does the timestamping. Depending on your device and application, this can be a real benefit for your use case.
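
A short sketch of how the timestamps, the status and the wrapped value could be unpacked from the returned DataValue:

DateTime sourceTime = value.getSourceTime();       // when the source produced the value
DateTime serverTime = value.getServerTime();       // when the server processed it
StatusCode status   = value.getStatusCode();       // quality of the value
Object raw          = value.getValue().getValue(); // unwrap the Variant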

Subscriptions

As already explained, OPC UA can do way better than explicit single reads. Using subscriptions it is possible to have fine grained control over what you request and even how you request data.

When coming from MQTT, you know that you get updates as they are published. However, you have no control over the frequency of those updates. Imagine your data source is originally some temperature sensor. It probably is capable of supplying data at sub-microsecond resolution and frequency. But pushing a value update every microsecond, even if no one listens, is a waste of resources.

In OPC UA

OPC UA allows you to take control over the subscription process from both sides. When a client creates a new subscription, it provides information like the number of in-flight events, the rate of updates and so on. The server has the ability to modify the request, but will try to adhere to it, and will then start serving it. Also, data will only be sent if there are actual changes. Imagine a pressure sensor: the value may stay the same for quite a while, but then suddenly change rather quickly. Re-transmitting the same value over and over again, just to achieve high frequency updates when an actual change occurs, would again be a waste of resources.

OPC UA Subscriptions Example

In order to achieve this in OPC UA, the client requests a subscription from the server, and the server fulfills that subscription and notifies the client of both value changes and subscription state changes. So the client knows whether the connection to the device is broken or there simply are no updates to the value. If no value changes occurred, nothing will be transmitted. Of course there is a heartbeat at the OPC UA connection level (one for all subscriptions), which ensures detection of communication loss as well.

In Eclipse Milo

The following code snippets will create a new subscription in Milo. The first step is to use the subscription manager and create a new subscription context:

// what to read
ReadValueId readValueId =
    new ReadValueId(nodeId, AttributeId.Value.uid(), null, null);

// monitoring parameters
int clientHandle = 123456789;
MonitoringParameters parameters =
    new MonitoringParameters(uint(clientHandle), 1000.0, null, uint(10), true);

// creation request
MonitoredItemCreateRequest request =
    new MonitoredItemCreateRequest(readValueId, MonitoringMode.Reporting, parameters);

The “client handle” is a client assigned identifier which allows the client to reference the subscribed item later on. If you don’t need to reference it by client handle, simply set it to some random or incrementing number.
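
If you only need uniqueness, a simple counter will do; a sketch (the AtomicLong is not part of the Milo API, just a plain Java helper):

// hand out a unique client handle per monitored item
AtomicLong clientHandles = new AtomicLong(1L);

UInteger clientHandle = uint(clientHandles.getAndIncrement());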

The next step will define an initial setup callback and add the items to the subscription. The setup callback will ensure that the newly created subscription will be able to hook up listeners before the first values are received from the server side, without any race condition:

// The actual consumer

BiConsumer<UaMonitoredItem, DataValue> consumer =
  (item, value) ->
    System.out.format("%s -> %s%n", item, value);

// setting the consumer after the subscription creation

BiConsumer<UaMonitoredItem, Integer> onItemCreated =
  (monitoredItem, id) ->
    monitoredItem.setValueConsumer(consumer);

// creating the subscription

UaSubscription subscription =
    client.getSubscriptionManager().createSubscription(1000.0).get();

List<UaMonitoredItem> items = subscription.createMonitoredItems(
    TimestampsToReturn.Both,
    Arrays.asList(request),
    onItemCreated)
  .get();

Taking Control

Of course consuming telemetry data is fun, but sometimes it is necessary to issue control commands as well. Issuing a command or setting a value on the target device is as easy as:

client
  .writeValue(nodeId, DataValue.valueOnly(new Variant(true)))
    .get();

This snippet will send the value TRUE to the object identified by nodeId. Of course, like for the read, there are more options when writing values/attributes of an object.
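
Since the write call resolves to a status code, it is also possible to check whether the server accepted the value; a small sketch:

StatusCode status =
  client
    .writeValue(nodeId, DataValue.valueOnly(new Variant(true)))
    .get();

if (status.isGood()) {
  System.out.println("Value written successfully");
}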

But wait … there is more!

This article only scratched the surface of the complex topic of OPC UA. But it should have given you a brief introduction to OPC UA and Eclipse Milo, and it should get you started with OPC UA from a client perspective.

Watch out for the repository ctron/milo-ece2017, which will receive some more content on OPC UA and Eclipse Milo and which will be the accompanying repository for my talk Developing OPC UA with Eclipse Milo™ at EclipseCon Europe 2017.

I would like to thank Kevin Herron for not only helping me with this article, but for helping me understand Milo and OPC UA so many times.


AsyncAPI Java Tools 0.0.4 released

It started as a proof-of-concept, checking out AsyncAPI and learning something about it. Actually, the topic came to me from two different angles. We were looking into solutions for RPC over messaging (namely AMQP 1.0) instead of HTTP/REST. The second angle was a discussion with a very good friend of mine about “how do you define, today, an API for messages going from the server to the client”. Everyone is talking about Swagger and OpenAPI, but this all seems to be request/response for HTTP based APIs.

Then someone mentioned AsyncAPI and I had to look into it. This is what I came up with so far.

AsyncAPI tools

ctron/asyncapi is a set of tools, written in Java, around AsyncAPI in general: reading the specification file into a Java model, a code generator for client and server side code, and a few base classes which the generated code makes use of.

The definition file parser is still not optimal. The idea is that the definition is based on JSON schema, but I couldn’t find any good support for JSON schema in Java at the moment. So right now it is all home-brew and, although it does get the job done, I don’t think it is pretty.

Code generation

The code generation is a bit more sophisticated. The idea of taking it out of the Maven plugin came pretty early. I guess it would be cool to have a Gradle plugin at some point in the future, and since the Maven plugin really is just a small wrapper around the code generator, that should easily be possible.

The code generation is backed by Eclipse JDT, which allows you to create and parse Java code in a DOM style way. I am constantly torn between liking and hating this. I did work with generic model-to-text tools in the past, and tools like Acceleo or Xpand are way easier to read than Java code generating Java code in this way. On the other hand, you would need a full-blown Ecore model first and then wrap this in a Maven plugin again. I am not sure that is fun either. Also, using the DOM approach it is quite simple to write extension modules for the code generation, which can process the generated Java DOM and extend it, without having to write an awfully complex code generation template. So let’s see where this goes in the future.

What works?

Don’t expect production-ready code … yet 😉 Currently the code generator creates the defined types/schemas, the messages and their payloads. Topics are parsed into services, versions and actions, and client and server interfaces are generated. There is also a client and server implementation using AMQP and Qpid JMS, which already allows you to communicate between client and server. Right now Gson is used for JSON payload serialization, but I think it should be pretty simple to swap this for e.g. Jackson.

What does not work?

The AsyncAPI specification actually defines a bit more than the tooling can currently handle. The server section is missing, and various metadata like license and descriptions are not supported. But the most important thing currently missing, IMHO, is an implementation backed by MQTT. Not that I am a big fan of MQTT, I would prefer AMQP in this case. But I would like to see two different AsyncAPI partners communicate via MQTT/JSON, one of them generated by this tool-set and the other one based on something completely different.

JSON schema is also more powerful than what the parser can currently handle. I am not sure JSON schema and Java are a very good fit, but if you want to go AsyncAPI, then you need to work with JSON schema. And this toolset needs to do a better job here.

AsyncAPI Maven Plugin

The AsyncAPI Maven plugin takes the code generator from the main tools project and wraps it into a Maven plugin. The idea is to simply drop in your AsyncAPI YAML file and let the Maven plugin generate the code for it. Of course this is Eclipse M2E aware, so you can simply save your YAML file and Eclipse will generate new code for you on the fly.

Examples

Interested in what this looks like in source code? Well, here are some examples. Also be sure to check out my examples repository.

First we need to create a builder for either the server or the client implementation:

Builder<JmsClient> builder = JmsClient.newBuilder()
  .host("localhost")
  .profile(AmqpProfile.DEFAULT_PROFILE)
  .payloadFormat();

Next we create an instance from it. The following is a client instance which will listen for a server-side event:

try (JmsClient client = builder.build();
     ListenerHandle listener =
        client.accounts()
          .eventUserSignup().subscribe(System.out::println)) {

  System.out.println("Waiting for messages…");
  Thread.sleep(Long.MAX_VALUE);

}

Then of course we need a server to publish the messages:

try (JmsServer client = builder.build()) {

  client.accounts().eventUserSignup()
    .publish(newUserMessage())
      .toCompletableFuture()
      .get();

}

Of course there is no need to actively wait for the message to be sent with get() if you don’t need to. But you can use the AsyncAPI in a synchronous way if you like 😉
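
If you would rather not block at all, the returned future can also be completed asynchronously; a sketch, assuming the same generated accounts() API and newUserMessage() helper from the example above:

client.accounts().eventUserSignup()
  .publish(newUserMessage())
    .toCompletableFuture()
    .whenComplete((result, error) -> {
      if (error != null) {
        error.printStackTrace(); // handle failed publish here
      }
    });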

What is next?

There is a lot to do. Really, a lot! I would like to do an interop test with MQTT. There are several fields from the specification which are currently not supported. The server side JMS API isn’t really suitable for JEE style programming, especially when it comes to container managed JMS.

So I hope I will find some time to work on the MQTT backed implementation, because that would validate that two different AsyncAPI tools could work together. And I think this is what it is all about.

Before I forget…

The initial idea of why I looked into AsyncAPI was to get message based request/response. Well, that is something which AsyncAPI doesn’t really provide. But, to be fair, it would only require a few changes to add this to the specification.


Kapua micro client SDK, running on a microcontroller

A few weeks back, while at EclipseCon France, I stumbled over a nice little gadget. There was a talk from MicroEJ about Java on microcontrollers, and they were showing an IoT related demo based on their development environment. It seemed they had Eclipse Paho (including TLS) and Google Protobuf running on their JVM without too much trouble.

ST Board with Thermocloud

My first idea was to simply drop the Kapua Gateway Client SDK on top of it, implementing the cloud facing API of MicroEJ and letting their IoT demo publish data towards Kapua.
After a few days I was able to order such an STM32F746G-DISCO board myself and play around with it a little bit. It quickly turned out that, while it was pretty easy to drop some Java code on the device, using the gateway client SDK was not an option. The MicroEJ JVM is based on Java CLDC 8. Sounds like Java 8, right? Well, it is more like Java 7. Aside from a few missing classes, the core features missing were Java 8’s lambdas and the enhancements to interfaces (like static and default methods).

Rewriting the gateway client SDK in Java 7, dropping the shiny API which we currently have, didn’t sound very appealing. But then again, implementing the Kapua communication stack actually isn’t that complicated and such an embedded device wouldn’t really need the extensibility and modularity of the Java 8 based gateway client SDK. So in a few hours there was the Kapua micro client SDK, which doesn’t consume any dependencies other than Paho and Protobuf and also only uses a minimal set of Java 7 functionality.

The second step was to implement the MicroEJ specific APIs and map the calls to the Kapua micro SDK, which wasn’t too difficult either. So now it is possible to simply install the “Kapua Data Channel Provider” from the MicroEJ Community Store. Alternatively you can compile the sources yourself as the code for this adapter is also on GitHub. Once the data channel provider is installed you can fire up any application consuming the DataChannel API, like the “Thermocloud” application, and publish data to Kapua. Please be sure to follow the installation instructions on the Kapua data channel provider for configuring the connection to your Kapua instance.

Kapua Data Channel Provider
Data from Thermocloud in Kapua

As the micro client is capable of running on Java 7, it might also be a choice for people wanting to connect from Android to Kapua without the need to go for Java 8. As Java 8 on Android still seems to be rather painful, this could be an option.


I would like to thank Laurent and Frédéric from MicroEJ, who did help me fix all the noob-issues I had.

GSoC with Kapua – Just peeking

The deadline for the phase evaluations of Google Summer of Code is pretty close. So it was time to take another look at what Arthur did with the logistics simulation for Eclipse Kapua.

The first phase was focused on creating a simple logistic network simulation and pushing telemetry data to Eclipse Kapua. The data isn’t realistic and that is ok, since this never was a requirement for the simulation. The second phase was focused around creating a simple dashboard for visualizing the data generated by the simulator.

So let’s have a quick look:

Now, as you can see, trains driving through oceans isn’t that realistic 😉 But we agreed on not taking things like this into consideration. Companies and locations are randomly generated, and figuring out which vehicles could drive on which roads between locations would simply be too much for this project.

The dashboard component is implemented using Dart, simply because it was considered cool 😉

There are still a few rough edges, and the next phase will focus on cleaning things up and making it something people can easily consume and play around with.

Just a bit of Apache Camel

Sometimes you write something and then nearly forget that you did it … even though it is quite handy. Here are a few lines of Apache Camel XML running in Eclipse Kura:

I just wanted to publish some random data from Kura to Kapua, without the need to code, deploy or build anything. Camel came to the rescue:

<routes xmlns="http://camel.apache.org/schema/spring">
  <route id="route1">

    <from uri="timer:1"/>
    <setBody><simple>${bean:payloadFactory.create("value", ${random(100)})}</simple></setBody>
    <to uri="kura-cloud:myapp/topic"/>

  </route>
</routes>

Dropping this snippet into the default XML Camel router:

  • Registers a Kura application named myapp
  • Creates a random number between 0 and 100 every second
  • Converts this to the Kura Payload structure
  • And publishes it on the topic topic

Talking to the cloud

While working on Eclipse Kapua, I wanted to run different tests, pushing telemetry data into the system. So I started to work on the Kura simulator, which can be used to simulate an Eclipse Kura IoT gateway in a plain Java project, with no special setup required. That helped a lot for unit testing and scale testing. Even generating a few simple telemetry data streams for simulating data works out of the box.

But then again, I wanted to have something more lightweight and controllable. With the simulator you derive from a simple class and are fully controlled by the simulator framework. That may work well in some cases, but in others you may want to hand control over to the actual application. Assume you already have a component which is “in charge” of your data, and now you want to push this data into the cloud. Of course you can do this somehow, working around the framework. But creating a nice API for it, which is simple and easy to understand, is way more fun 😉

So here is my take on a Gateway Client API, sending IoT data to the cloud, consuming command & control from it.

Intentions

I wanted to have a simple API: easy to understand, readable, preventing you from making mistakes in the first place. And if something goes wrong, it should go wrong right away. Currently we go with MQTT, but there would be an option to go with HTTP as well, or AMQP in the future. Also for MQTT we have Eclipse Paho and FUSE MQTT. Both should be available; both may have special properties, but they share some common ground. So implementing new providers should be possible, while sharing code should be easy as well.

Example

Now here is what I came up with:

try (Client client = KuraMqttProfile.newProfile(FuseClient.Builder::new)
  .accountName("kapua-sys")
  .clientId("foo-bar-1")
  .brokerUrl("tcp://localhost:1883")
  .credentials(userAndPassword("kapua-broker", "kapua-password"))
  .build()) {

  try (Application application = client.buildApplication("app1").build()) {

    // subscribe to a topic

    application.data(Topic.of("my", "receiver")).subscribe(message -> {
      System.out.format("Received: %s%n", message);
    });

    // cache sender instance

    Sender sender = application
      .data(Topic.of("my", "sender"))
      .errors(ignore());

    int i = 0;
    while (true) {
      // send
      sender.send(Payload.of("counter", i++));
      Thread.sleep(1000);
    }
  }
}

Looks pretty simple, right? In the background the MQTT connection is managed, payloads get encoded, birth certificates get exchanged and subscriptions get managed. But the main application still stays in control of the data flow.

How to do this at home

If you want to have a look at the code, it is available on GitHub (ctron/kapua-gateway-client) and ready to consume on Maven Central (de.dentrassi.kapua). But please be aware of the fact that this is a proof-of-concept, and may never become more than that.

Simply adding the following dependency to your project should be enough:

<dependency>
  <groupId>de.dentrassi.kapua</groupId>
  <artifactId>kapua-gateway-client-provider-mqtt-fuse</artifactId>
  <version>0.2.0</version> <!-- check for a more recent version -->
</dependency>

With this dependency you can use the example above. If you want to go for Paho instead of FUSE, use kapua-gateway-client-provider-mqtt-paho instead.

Taking it for a test drive

Now taking this for a test drive is even more fun. Eclipse SmartHome has the concept of a persistence system, where telemetry data gets stored in a time series like database. There is a default implementation based on rrd4j. So re-implementing this interface for Kapua was quite easy and resulted in an example module which can be installed into the Karaf based OpenHAB 2 distribution with just a few commands:

openhab> repo-add mvn:de.dentrassi.kapua/karaf/0.2.0/xml/features
openhab> feature:install eclipse-smarthome-kapua-persistence

Then you need to re-configure the component via the “Paper UI” and point it towards your Kapua setup. Maybe you will need to tweak the “kapua.persist” file in order to define what gets persisted and when. And if everything goes well, your temperature readings will get pushed from SmartHome to Kapua.


Google Summer of Code 2017 with Eclipse Kapua

I am happy to announce that Eclipse Kapua got two slots in this year’s Google Summer of Code. Yes, two projects got accepted, and both are for the Eclipse Kapua project.

Anastasiya Lazarenko will provide a simulation of a fish tank and Arthur Deschamps will go for a supply chain simulation. Both simulations are planned to feed their data into Eclipse Kapua using the Kura simulator framework. Although both projects seem quite similar from a high level perspective, I think they are quite different when it comes to the details.

The basic idea is not to provide something like a statistically/physically/… valid simulation, but something to play around and interact with: spinning up a few virtual instances of both models, hooking them up to our cloud based IoT solution, interacting a bit with them and getting some reasonable feedback values.

For Kapua this will definitely mean evolving the simulator framework based on the feedback from both students, making it (hopefully) easier to use for other tasks. And maybe, just maybe, we can also go the extra mile and make the same simulations available for Eclipse Hono.

If you want to read more about Anastasiya and Arthur just read through their introductions on kapua-dev@eclipse.org and give them a warm welcome:

read Anastasiya’s introduction
read Arthur’s introduction

Best of luck to you!

OPC UA with Apache Camel

Apache Camel 2.19.0 is close to its release, and the OPC UA component called “camel-milo” will be part of it. This is my Eclipse Milo backed component which was previously hosted in my personal GitHub repository ctron/de.dentrassi.camel.milo. It has now been accepted into Apache Camel and will be part of the 2.19.0 release. As there is already a release candidate available, I think it is a great time to give a short introduction.

In a nutshell OPC UA is an industrial IoT communication protocol for acquiring telemetry data and command and control of industrial grade automation systems. It is also known as IEC 62541.

The Camel Milo component offers both an OPC UA client (milo-client) and server (milo-server) endpoint.

Running an OPC UA server

The following Camel example is based on Camel Blueprint and provides some random data over OPC UA, acting as a server:

Example project layout

The blueprint configuration would be:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="
	http://www.osgi.org/xmlns/blueprint/v1.0.0 https://osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
	http://camel.apache.org/schema/blueprint https://camel.apache.org/schema/blueprint/camel-blueprint.xsd
	">

	<bean id="milo-server"
		class="org.apache.camel.component.milo.server.MiloServerComponent">
		<property name="enableAnonymousAuthentication" value="true" />
	</bean>

	<camelContext xmlns="http://camel.apache.org/schema/blueprint">
		<route>
			<from uri="timer:test" />
			<setBody>
				<simple>${random(0,100)}</simple>
			</setBody>
			<to uri="milo-server:test-item" />
		</route>
	</camelContext>

</blueprint>

And adding the following Maven build configuration:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

	<modelVersion>4.0.0</modelVersion>

	<groupId>de.dentrassi.camel.milo</groupId>
	<artifactId>example1</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<packaging>bundle</packaging>

	<properties>
		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
		<camel.version>2.19.0</camel.version>
	</properties>

	<dependencies>
		<dependency>
			<groupId>ch.qos.logback</groupId>
			<artifactId>logback-classic</artifactId>
			<version>1.2.1</version>
		</dependency>
		<dependency>
			<groupId>org.apache.camel</groupId>
			<artifactId>camel-core-osgi</artifactId>
			<version>${camel.version}</version>
		</dependency>
		<dependency>
			<groupId>org.apache.camel</groupId>
			<artifactId>camel-milo</artifactId>
			<version>${camel.version}</version>
		</dependency>
	</dependencies>

	<build>
		<plugins>
			<plugin>
				<groupId>org.apache.felix</groupId>
				<artifactId>maven-bundle-plugin</artifactId>
				<version>3.3.0</version>
				<extensions>true</extensions>
			</plugin>
			<plugin>
				<groupId>org.apache.camel</groupId>
				<artifactId>camel-maven-plugin</artifactId>
				<version>${camel.version}</version>
			</plugin>
		</plugins>
	</build>

</project>

This allows you to simply run the OPC UA server with:

mvn package camel:run

Afterwards you can connect with the OPC UA client of your choice and subscribe to the item test-item, receiving that random number.
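
If you prefer Camel’s Java DSL over Blueprint XML, the same route can be sketched roughly like this (not part of the example project above, just an equivalent using the same endpoints):

import org.apache.camel.builder.RouteBuilder;

public class RandomValueRoute extends RouteBuilder {
    @Override
    public void configure() {
        // publish a random number to the OPC UA server item every second
        from("timer:test")
            .setBody(simple("${random(0,100)}"))
            .to("milo-server:test-item");
    }
}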

Release candidate

As this is currently the release candidate of Camel 2.19.0, it is necessary to add the release candidate Maven repository to the pom.xml. I omitted this in the example above, as it will no longer be necessary once Camel 2.19.0 is released:

<repositories>
	<repository>
		<id>camel</id>
		<url>https://repository.apache.org/content/repositories/orgapachecamel-1073/</url>
	</repository>
</repositories>

<pluginRepositories>
	<pluginRepository>
		<id>camel</id>
		<url>https://repository.apache.org/content/repositories/orgapachecamel-1073/</url>
	</pluginRepository>
</pluginRepositories>

It may also be that the URLs (marked above) change as a new release candidate gets built. In this case you will need to update them to the appropriate repository URL.

What’s next?

Once Camel 2.19.0 is released, I will also mark my old, personal GitHub repository as deprecated and point people towards this new component.

And of course I am happy to get some feedback and suggestions.

Simulating telemetry streams with Kapua and OpenShift

Sometimes it is necessary to have some simulated data instead of fancy sensors attached to your IoT setup. As Eclipse Kapua starts to adopt Elasticsearch, it seemed necessary to actually test Kapua’s inbound telemetry stream: data coming from the gateway, being processed by Kapua, stored into Elasticsearch and then retrieved back from Elasticsearch over the Kapua REST API. A lot can go wrong here 😉

The Kura simulator, which is now hosted in the Kapua repository, seemed to be the right place to do this. That way we can not only test this inside Kapua, but also support different use cases for simulating data streams outside of unit tests, and we can leverage the existing OpenShift integration of the Kura simulator.

The Kura simulator now also has the ability to send telemetry data. In addition, there is a rather simple simulation model which can use existing value generators and map them to a more complex metric setup.

From a programmatic perspective, creating a simple telemetry stream would look like this:

GatewayConfiguration configuration = new GatewayConfiguration("tcp://kapua-broker:kapua-password@localhost:1883", "kapua-sys", "sim-1");
try (GeneratorScheduler scheduler = new GeneratorScheduler(Duration.ofSeconds(1))) {
  Set<Application> apps = new HashSet<>();
  apps.add(simpleDataApplication("data-1", scheduler, "sine", sine(ofSeconds(120), 100, 0, null)));
  try (MqttAsyncTransport transport = new MqttAsyncTransport(configuration);
       Simulator simulator = new Simulator(configuration, transport, apps);) {
      Thread.sleep(Long.MAX_VALUE);
  }
}

The Generators.simpleDataApplication creates a new Application from the provided map of functions (Map<String,Function<Instant,?>>). This is a very simple application, which reports a single metric on a single topic. The Generators.sine function returns a function which creates a sine curve using the provided parameters.

Now one might ask: why is this a Function<Instant,?>, wouldn’t a simple Supplier be enough? There is a good reason for that. The expectation of the data simulator is that the telemetry data is derived from the provided timestamp. This is done in order to generate predictable timestamps and values along the communication path. In this example we only have a single metric in a single instance, but it is possible to scale up the simulation to run 100 instances on 100 pods in OpenShift. In this case, for each simulation step, every JVM would receive the same timestamp, and thus each of those 100 instances should generate the same values, sending the same timestamps upwards to Kapua. Validating this information later on becomes quite easy, as you can not only measure the time delay of the transmission, but also check whether there are inconsistencies in the data, gaps or other issues.
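
To illustrate the point, a custom generator is just a function deriving its value from the given timestamp, so two instances fed the same instant produce the same value. A simple sawtooth, for example (a sketch only, not part of the simulator framework):

// rises from 0 to ~100 over one minute, then starts over,
// computed purely from the provided timestamp
Function<Instant, Double> sawtooth =
    timestamp -> (timestamp.toEpochMilli() % 60_000L) / 600.0;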

When using the SimulationRunner, it is possible to configure data generators instead of coding:

{
 "applications": {
  "example1": {
   "scheduler": { "period": 1000 },
   "topics": {
    "t1/data": {
     "positionGenerator": "spos",
     "metrics": {
      "temp1": { "generator": "sine1", "name": "value" },
      "temp2": { "generator": "sine2", "name": "value" }
     }
    },
    "t2/data": {
     "metrics": {
      "temp1": { "generator": "sine1", "name": "value" },
      "temp2": { "generator": "sine2", "name": "value" }
     }
    }
   },
   "generators": {
    "sine1": {
     "type": "sine", "period": 60000, "offset": 50, "amplitude": 100
    },
    "sine2": {
     "type": "sine", "period": 120000, "shift": 45.5, "offset": 30, "amplitude": 100
    },
    "spos": {
     "type": "spos"
    }
   }
  }
 }
}

For more details about this model see: Simple simulation model in the Kapua User Manual.

And of course this can also be managed with the OpenShift setup. Loading a JSON file works like this:

oc create configmap data-simulator-config --from-file=KSIM_SIMULATION_CONFIGURATION=../src/test/resources/example1.json
oc set env --from=configmap/data-simulator-config dc/simulator

Finally it is now possible to visually inspect this data with Grafana, directly accessing the Elasticsearch storage: