Building RPMs on any platform with Maven

On several occasions I have had to build RPM packages for installing software. In the past I mostly did this with a Maven build using the RPM Maven Plugin.

The process is simple: at the end of your build you gather up all resources, try to understand the mapping configuration, bang your head a few times in order to figure out a way to work with -SNAPSHOT versions, and that’s it. In the end you have a few RPM files.

The only problem is that the plugin actually creates a spec file and runs the rpmbuild command line tool, which is, of course, only available on an RPM-like system. Fortunately Debian/Ubuntu based distributions, although they use a different packaging format, at least provide the rpmbuild tool.

On Windows or Mac OS the situation looks different. Getting rpmbuild onto Windows can be quite a task. Still, the question remains why this should be necessary at all, since Java runs on all platforms.

So it was time to write a Maven plugin which does not use the rpmbuild tool, but creates RPM packages natively in Java:

de.dentrassi.maven:rpm is a Maven plugin which creates RPM packages using plain Java. The process is simple and fast and does not require any additional command line tools. The plugin is open source and the source code is available on GitHub at ctron/rpm-builder.

Writing RPM files … in plain Java

Now creating an RPM file is easy. There are a lot of tutorials out there on how to write a SPEC file and build your RPM. Even when you are using Maven … with the exception that when you are on Windows or Mac OS X, the Maven RPM plugin will still try to invoke rpmbuild in order to actually build the RPM file. The Maven plugin simply creates a SPEC file, lays out the payload data and lets rpmbuild do the processing.

My task now was to make it possible for Eclipse NeoSCADA to create configuration RPMs directly from inside the Eclipse IDE (running in Java), without needing rpmbuild on a Windows platform. Since I had written an RPM reader for Package Drone before, I knew a bit about the RPM file format. So this shouldn’t be a big deal?! … How naive 😉


Eclipse Mattermost – What’s the state?!

A few weeks ago we started to test Mattermost as a communication channel for Eclipse Foundation projects. So, how is it going?

First of all, Cédric Brun wrote a bunch of Java tasks which create Mattermost events based on various other Eclipse systems (like Gerrit, Bugzilla, Twitter, Forum, …). These integrate nicely with the Mattermost channels and allow each project to aggregate all these events in a single location. So you can now actually get a notification in Mattermost about a new forum entry for your Eclipse project. Cool stuff! Many thanks!

On the usage side we are currently approaching the 100 user mark. At the time of writing there are 94 registered users, 24 public and 8 private channels, about 150 posts per day and between 10 and 20 active users. You can have a look at the statistics below.

So is it a success? Well, it goes in the right direction. Please don’t forget that this is just a test, but on the other hand there are people who are using it on a day-to-day basis for their Eclipse work. The setup of one big team with many channels (even multiple per project) seems to work fine. Single sign-on will be a topic; right now Mattermost has its own user management. Also, the setup for projects is still a manual process, which could be automated somehow. So there are a few topics left to solve.

But for me it is really amazing to see how quickly a new communication platform can be established when people work together. And it seems that people accept the new system quite well. I really hope we can establish this as a permanent solution.

Mattermost Statistics 2016-01-29

Java 8 magic with method references

When you start learning a new programming language you often encounter snippets of code where you have no idea why they work. The more you learn about that programming language, the more you understand, and these moments become rare.

Today, after many years of programming in Java, I ran into such a situation with Java 8 and was really fascinated by it.

It all started with the problem of having a Stream<T> and wanting to “for-each” iterate over its content. The for-each construct in Java requires an array or an Iterable<T>. However, Stream<T> only provides an Iterator<T>, which is not the same.

Now there are many solutions (good and bad) out there for this problem. However one solution really fascinated me:

Stream<String> s = ...;

for (String v : (Iterable<String>) s::iterator) {
   ...
}

Now wait … Stream<T> does have a method iterator() which returns an Iterator<T>. But an Iterator<T> cannot be cast to an Iterable<T>! And “s::iterator” is not calling the method, but referencing it.

Screenshot of Eclipse Quick Fix for Lambda expressions

Pasting this code fragment into the Eclipse IDE helps to understand what actually happens. Pressing Ctrl+1 on a code fragment allows you to convert method references to lambda expressions and lambda expressions to anonymous classes. Quite fantastic 😉

So, let’s see how this code fragment gets expanded to a lambda expression:

for ( final String v : (Iterable<String>)() -> s.iterator () ) {
   ...
}

And this lambda expression is equivalent to:

for ( final String v : new Iterable<String> () {
  @Override
  public Iterator<String> iterator () {
    return s.iterator ();
  }
} ) {
  ...
}

The last snippet is rather bloated, as anonymous inner classes in Java have always been.

The magic here is provided by the new Java 8 features “method references” and “functional interfaces”. A functional interface is a Java interface which has only one method to implement; “default” methods don’t count. Looking at Iterable<T>, this is the case. So an Iterable<T> can be implemented with a lambda expression or a method reference. But for the for-each loop, Java does not “know” what you have in mind. This is where the cast comes into play: by casting the method reference to Iterable<T>, Java infers that an Iterable<T> is required, which can be provided by the method reference s::iterator.
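
To make it clearer that the cast only supplies the target type, the same trick can be written without the inline cast by assigning the method reference to a variable first. A minimal sketch:

import java.util.stream.Stream;

public class IterableFromStream {
  public static void main(String[] args) {
    Stream<String> s = Stream.of("a", "b", "c");

    // The declared type of the variable gives the compiler the target type,
    // so no cast is needed here
    Iterable<String> iterable = s::iterator;

    for (String v : iterable) {
      System.out.println(v);
    }
  }
}

Note that the resulting Iterable is only good for a single pass, since the underlying stream can be consumed only once.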

But looking at Iterable<T>, there is no @FunctionalInterface annotation present?!

That is right. But @FunctionalInterface is not a requirement for actually being a functional interface; it only tells the compiler to fail if the interface is not one. So the downside of this example is that there is no guarantee that Iterable<T> will always stay a functional interface, since its authors have not committed to that using @FunctionalInterface.
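
One way to limit that risk is to hide the trick behind a small helper method, so that if Iterable<T> should ever stop being a functional interface, only one place needs fixing. A minimal sketch; the class and method names are made up:

import java.util.stream.Stream;

public final class Streams {
  private Streams() {
  }

  // Adapts a Stream to an Iterable for use in a for-each loop.
  // Note: the result can only be iterated once, since the underlying
  // stream can only be consumed once.
  public static <T> Iterable<T> iterate(Stream<T> stream) {
    return stream::iterator;
  }
}

The loop then simply becomes for (String v : Streams.iterate(s)) { … }.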

In the end, I am not sure if this is a good solution for my original problem. But it still is a fascinating piece of code and a great idea indeed.

Testing Diaspora – Part 3

This is the third time I am testing Diaspora. I never wrote about the earlier attempts, but between Christmas and New Year I had a bit of time to write this up.

Motivated by the article at Heise about diaspora, I decided it was time to give diaspora another try. I had tried the first version after the crowdfunding campaign, and again one or two years later.

For this test I registered at “despora.de”, right here: https://despora.de/u/ctron

The first thing I have to say is that getting started is still a problem. There are a lot of pods (diaspora servers) running where you can easily create a new account right away. But then … I registered an account at joindiaspora.org a few years back, just to find out now that this pod runs a pretty old version of diaspora and does not accept any new registrations. It looks kind of dead to me. Now that I have my account on that pod, I can download my account data, but I cannot migrate my account to a new pod. I have to start from scratch and lose all my social activity. Not that I did much with that account 😉

So in the end you somehow want to be in control of your pod. And having a diaspora ID which contains your own domain name is just another reason for that. But setting up diaspora is a nightmare. Looking at the different options, you either need a full virtual server, or you pay at least $15 each month to run your own cloud instance at some cloud provider when using the Bitnami variant of diaspora. Bitnami in turn has changed diaspora in some ways, so that the diaspora project itself asks you to look into the Bitnami wiki first. A few other cloud based approaches ask you to fork and edit the diaspora git repository just to get started.

I wouldn’t mind paying a few bucks for hosting my own pod, but paying for a full-blown virtual server and setting up things like Redis just for one or two accounts on this host is oversized.

Joining a pre-existing pod, on the other hand, seems like a bad idea unless you really know who is running the pod. In the end you entrust your social account and data to somebody you probably don’t know, and you cannot be sure that they will keep the pod (and thus your social identity) running as long as you would like.

Of course the same is true for Facebook, Google+ and all the others. But diaspora wants to make a change. So setting up a pod or getting an account that you really control must get simpler!

As it looks to me, right now the diaspora software itself targets installations with a larger number of users. So each pod should be capable of running as many accounts as possible (although diaspora itself is decentralized). And it really is good that there is a protocol between pods which can be implemented in different ways. So there are implementations of the diaspora protocol which are not based on the diaspora source code itself.

But back to the problem of running a pod that you control. If there were a “light version” of a pod, which supports a few users instead of a thousand, then sharing a virtual server would be much easier: a Docker container, an OpenShift instance, some micro instance on Google Cloud or AWS. That would be much cheaper.

At the same time this would allow one to run diaspora on a Raspberry Pi like device at home. If you are only hosting one or two accounts, your local DSL line is pretty much sufficient for running your own pod.

As sad as it is, I guess this means seeing you all for part #4 in a few years. If you have some spare time, I think contributing to diaspora is a great idea, because the idea behind diaspora is great. But right now, it simply is not my cup of tea.

Christmas presents – sell/recycle for charity

Before and during Christmas most people think about people in need, and hopefully donate something. Not necessarily money, but presents as well.

But after Christmas there are lots of unwanted presents and things you don’t need, don’t like or will never use. In most cases these items end up in the trash, on some shelf or, at best, are returned to the store.

Now just think of a different way: what if you could put those presents up on eBay in a charity special? You start an auction, put up your unwanted presents and select some registered non-profit organization that will receive the money. eBay won’t charge you for the transaction, at least not for those charity auctions.

Sounds like a great idea to me.

Test driving “Mattermost” at the Eclipse Foundation

Thanks to @bruncedric and the Eclipse Webmasters, we were able to quickly start a test of Mattermost at https://mattermost-test.eclipse.org.

“Mattermost” is a Slack/HipChat/…-like web messaging system (aka webchat). I don’t want to go into too much detail about the system itself, but the main idea is to have a “faster-than-email” form of communication for a team of people. Comparable to IRC, but more HTML5-ish. It also features a REST API, which can be used to automate inbound and outbound messages for the different channels.

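As a small illustration of that, one way to push automated messages into a channel is an incoming webhook (assuming incoming webhooks are enabled on the instance). A minimal sketch in Java; the webhook URL and message text are made up:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class MattermostWebhookPost {
  public static void main(String[] args) throws Exception {
    // Hypothetical incoming webhook URL, generated by Mattermost for a channel
    URL url = new URL("https://mattermost-test.eclipse.org/hooks/your-webhook-token");

    byte[] payload = "{\"text\": \"Hello from an automated task\"}".getBytes(StandardCharsets.UTF_8);

    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.setRequestMethod("POST");
    connection.setRequestProperty("Content-Type", "application/json");
    connection.setDoOutput(true);

    // Send the JSON payload
    try (OutputStream out = connection.getOutputStream()) {
      out.write(payload);
    }

    // Mattermost answers with HTTP 200 if the message was accepted
    System.out.println("Response: " + connection.getResponseCode());
  }
}
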
Why not Slack or HipChat? Simply because the Eclipse Foundation requires its IT components to be based on open source solutions and not to rely on any service which can go away at any moment without the possibility of rescuing your data in a portable format. Which is quite a good approach if you ask me. Just imagine having years of data and losing it because your service provider simply shuts down.

So right now there is a Mattermost instance at https://mattermost-test.eclipse.org which is intended as a test setup for “Mattermost”, to figure out how it can be used to benefit Eclipse projects. Simply adding more technical gimmicks might not always be a good idea.

Package Drone also has a channel at Mattermost.

So go ahead and give it a test run …

… if you have troubles or ideas … just look at eclipse/mattermost.

Parsing RPMs in Java

The core idea of Package Drone is to extract metadata from files and generate some sort of repository index. And although Package Drone’s main focus is on OSGi, we wanted to implement a YUM repository adapter, and for this we needed to extract metadata from RPM files.

Package Drone itself is written in Java, so I wanted some sort of Java approach. Of course it would be possible to run the rpm command in the background, parse the output somehow and gather the metadata from that. Or to write a JNI wrapper around librpm and extract the information with native library calls. However, this is not only error prone, but also a nightmare when it comes to porting.
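
Just to illustrate what that external-process approach would look like, here is a minimal sketch that shells out to the rpm tool to read a single header field (the file name is a placeholder). It only works where rpm is installed, and everything beyond a single field quickly turns into fragile output parsing:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class RpmViaCommandLine {
  public static void main(String[] args) throws Exception {
    // Ask the external rpm tool for the package name of an RPM file
    Process process = new ProcessBuilder("rpm", "-qp", "--queryformat", "%{NAME}", "file.rpm")
        .redirectErrorStream(true)
        .start();

    try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(process.getInputStream(), StandardCharsets.UTF_8))) {
      System.out.println("Name: " + reader.readLine());
    }

    // A non-zero exit code only tells us that something went wrong
    System.out.println("Exit code: " + process.waitFor());
  }
}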

So I really was looking for a plain Java solution which was also compatible with the Eclipse license (EPL). I came across jRPM and redline.

jRPM was last updated around 2005, still has an Apache 1.1 license and is simply stuck in the past. redline is more up to date and sounded promising at first, but the library is really more like a jar file with some “main” entry points and an Ant task. There is no clear API for programmatically reading RPM files. And the legal aspects were a little bit troublesome to me: 28 contributors according to GitHub, no CLA, an “MIT license” from a company simply named “FreeCompany”, and Google tracking code baked right into the Maven POM file. So, I had to do it myself 😉

A fresh start

So as not to fall into the same pitfalls, I started by writing a parsing library first, instead of directly writing the Package Drone extractor module. This way there is now a clean library which can parse RPM files. It also is an OSGi bundle, which was necessary for Package Drone, but it does not make use of any OSGi functionality, so it can still be used as a simple JAR file. It is licensed under the EPL, as required for Eclipse projects anyway, with the Eclipse CLA and IP process taking care of the legal aspects. And if you are an Eclipse project, you don’t even need a CQ to use it.

What’s the catch?

I implemented what I needed, and that was reading RPM metadata and building a YUM repository index. So writing RPM files or reading/writing signatures is not possible at the moment. However, there are plans to sign YUM repositories and RPM files as well, so this limitation is only a matter of time. There are also some fields which are not mapped to enums. RPM uses numeric IDs internally; many are mapped, but not all. You can still access those, but by number instead of by enum in that case.

This library is also currently not on Maven Central. But again, I am working on a “deploy to Maven Central” feature in Package Drone, which will clear that blocker.

So where are we now?

So the code is right here on GitHub. In a few weeks we will have a binary download, but right now the Eclipse IP process has to clear the way first.

Looking at the source code of the test case you can see how this library works. Instead of working on a plain File, it can work on an InputStream, which can be important if your RPM file comes from a remote location.

The following example shows how to extract metadata and content from an RPM file.

try(RpmInputStream in = new RpmInputStream(new FileInputStream("file.rpm"))) {
  String name = (String)in.getPayloadHeader ().getTag (RpmTag.NAME);

  CpioArchiveInputStream cpio = in.getCpioStream ();
  CpioArchiveEntry entry;
  while ((entry = cpio.getNextCPIOEntry ()) != null) {
    process ( entry );
  }
}

There is also the older JavaDoc, which has to be updated to reflect the change to org.eclipse.packagedrone.

What’s next?

Of course a binary release at Eclipse, an updated JavaDoc and publishing the binaries to Maven Central. Besides that, extending the library to be able to sign and write RPM files.

If you would like to help, or need some help using it, just let me know!

The ConPanion

While preparing for EclipseCon Europe 2015 (a few weeks back already, I have to say; it was a great conference) I again wanted to have a small mobile helper. So instead of forgetting about it (again), this time I briefly wrote it down. So here it is.

Going to a conference, I want to have the schedule in advance, and I want a nicely rendered view on my mobile phone, offline(!), so I can plan which talks to attend while riding public transportation, clicking together a plan. Now for this to work, the conference itself has to provide some sort of data set to make this happen. I really don’t want a specific mobile app for each conference. Also, adding all the features I have in mind would blow any budget which might be available for a single conference. No, the basic idea is to build one tool and let each conference publish the information itself. So the tool simply picks up this … let’s say, XML file containing all the information necessary for the mobile helper (ok, use JSON if you like that better).

So the effort for the user is to download the app (once) and the conference simply has to provide (and update) an XML file. This is “Tier 1” or the “Free Tier”.

Now there are two additional tiers (groups of services) which could be used to bring in money, but either way they will cost money for hosting.

The first group of extensions is some sort of default set of additional services around the conference, like ad-hoc meetings for example: type a hashtag, create a new ad-hoc meeting and gather interested people for a chat (an easier version of EclipseCon’s BoFs). This requires some backend service, so money has to be spent, and in turn it has to be earned. Of course it would also be possible to offer car rentals, hotel reservations etc.

The second group of extensions to the app would be a service to host the content for the conventions. Not all conventions are software developer conferences (at least that is what I have heard), so maybe a conference does not want to fiddle around with the XML data set itself, but would rather have a beautifully designed web UI which does all the magic.

I wrote up the ideas in a PDF file, but don’t expect too much additional information. This is the basic idea.

Read the PDF: ConPanion.pdf