The ConPanion

While preparing for EclipseCon Europe 2015 (a few weeks back, I have to say, and it was a great conference) I again wanted to have a small mobile helper. So instead of forgetting about it (again), this time I briefly wrote it down. So here it is.

Going to a conference, I do want to have the schedule in advance: a nicely rendered view on my mobile phone, offline(!), so I can make plans about which talks to attend while riding public transportation. Clicking together a plan. Now for this to work, the conference itself has to provide some sort of data set to make this happen. I really don’t want to have a specific mobile app for each conference. Also, adding all the features I have in mind would crash any budget which might be available for a single conference. No, the basic idea is to make one tool and let the conference publish the information itself. So the tool simply picks up this … let’s say, XML file, containing all the information necessary for the mobile helper (ok, use JSON if you like that better).
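To give a rough idea, such a file could look like this (just a sketch, all element and attribute names made up on the spot):

<conference name="EclipseCon Europe 2015">
  <day date="2015-11-03">
    <session start="10:30" end="11:05" room="Theater Stage">
      <title>Some interesting talk</title>
      <speaker>Jane Doe</speaker>
      <abstract>What this talk is about …</abstract>
    </session>
  </day>
</conference>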

So the effort for the user is to download the app (once) and the conference simply has to provide (and update) an XML file. This is “Tier 1” or the “Free Tier”.

Now we do have two additional tiers (groups of services) which could be used to bring in money, but either way they will cost money for hosting.

The first group of extensions is some sort of default: additional services around the conference, like ad-hoc meetings for example. Type a hashtag, create a new ad-hoc meeting and gather interested persons to have a chat (an easier version of EclipseCon’s BoFs). This requires some backend service, so money has to be spent, and the other way round, it has to be earned. Of course it would also be possible to offer car rentals, hotel reservations etc.

The second group of extensions to the app would be a service to host the content for the conventions. Not all conventions are software developer conferences (at least I heard that), so maybe the conference does not want to fiddle around with the XML data set itself, but have a beautifully designed web UI which does all the magic.

I did write up the ideas in a PDF file, but don’t expect too much additional information. This is the basic idea.

Read the PDF: ConPanion.pdf

You left your phone!

Dear Google,

Today it happened again: I left my mobile phone at home. I realized it before entering the train, so I went back, grabbed it and caught the next train. This gave me a few minutes to think about it 😉

You got Android Wear/Watch and already have the smart unlock functionality where your watch (if it is near your phone) can unlock it.

Now you could also detect the opposite scenario: my watch notices that it lost “visibility” of my mobile phone, and an alarm could pop up on my watch, notifying me about the fact that I left my phone behind. I can dismiss the alarm, or decide to go back.

This might cause a few false alarms, but with a small set of additional rules (like “only do this at home/work”, or “only do this around 8 am and 6 pm”) these could be reduced quite a bit.

Please let me know when you are ready.

Kind regards


PS: Actually I do need to buy a watch for that …

Java 8 streaming – or not?

One of the most advertised use cases of the new lambdas in Java 8 is the possibility to stream collections and transform them. The “series of tubes” has a lot of examples on how to do this; I just wanted to look at it from a different perspective: readability.

So let’s start with a real-life problem: a map Map<ResultKey, List<ResultEntry>> result, which I want to transform into a Set<String>.

Before Java 8, I had something like:

Set<String> ids = new HashSet<> ();
for ( List<ResultEntry> list : result.values () ) {
  for ( ResultEntry entry : list ) {
    if ( entry.getAction () == Action.DELETE ) {
      String id = entry.getArtifact ().getId ();
      ids.add ( id );
    }
  }
}
context.deleteArtifacts ( ids );

Now, with Java 8, I can do:

Set<String> deleteSet = result.values ().stream ()
    .flatMap ( list -> list.stream () )
    .filter ( entry -> entry.getAction () == Action.DELETE )
    .map ( entry -> entry.getArtifact ().getId () )
    .collect ( Collectors.toSet () );
context.deleteArtifacts ( deleteSet );

Neither one is shorter nor seems less complex from an initial view. So, which one is better?

Searching for lyrics in Google Music

Dear Google,

as a subscriber of Google Music I often search for music I listened to in the past. In many cases I do know the title, the artist or sometimes the name of the album.

But sometimes I just remember a fragment of the lyrics. However, searching for a fragment from the lyrics brings up nothing useful in Google Music.

So please, Google, allow me to search not only for metadata like title, artist or album, but also for lyrics.

And while you are at it, please also let me narrow down the search afterwards, like by a time period (the 90s), a genre (Rock), a language (some German bands do use English album titles, but produce German lyrics), a country of origin (yes, German bands are capable of singing English lyrics).

You are the search giant … aren’t you?

Calling from Google search

Dear Google …


… I just did a search for some business on Google, and actually the first hit, with the suggestion box on the right side, was the perfect match for what I was looking for. The info box contains all the relevant information, including the phone number, which is a clickable link.

Now I did the search on the desktop computer and not on my mobile phone. Clicking the link automatically starts Google Hangout with a call to that telephone number, without asking for confirmation. Too bad that the computer does not have a microphone attached.

So please…

… instead of dialing from my desktop computer with Hangout, I would rather see a small confirmation box which (once I pressed “OK” or “Yes” or whatever) sends a message to my mobile phone, triggering the call to this telephone number.

Since I was already logged in with my Google account when doing the search, there should be no problem detecting my registered Android phone and sending a Cloud-to-Device message which triggers the call. You already do the same using “Chrome to Phone” for web pages.

Hopefully you can add this in a future release πŸ˜‰

Maven Tycho/JGit based build timestamps and the “target” directory

Now, when you build OSGi bundles using Maven Tycho, you have probably run into the issue of creating a meaningful version qualifier (remember, an OSGi version is always major.minor.micro.qualifier, so no dash and definitely no -SNAPSHOT).

There are a few approaches, ranging from fully manual assignment of the build qualifier, over simple timestamps, to timestamps based on the last Git change.

The background

The latter one is described in the “Reproducible Version Qualifiers” wiki page of Tycho as a recipe to create the same qualifier from the same source code revision.

Actually the idea is pretty simple: instead of the current timestamp, the last relevant change in the git repository for the directory of the bundle is located, and then used to generate the timestamp-based qualifier.

As a side note: Personally I came to the conclusion that this sounds great in the beginning, but turns out to be troublesome later. First of all, the Build Qualifier Plugin conflicts with the Source Ref Plugin, which generates a different manifest: both plugins find different last commits, and therefore a different MANIFEST.MF gets generated. So two builds produce two bundles with the same qualifier, but actually (due to the MANIFEST.MF) different content, with two different checksums, which causes issues later on and has to be cleaned up by some baseline repository matching. In addition, you simply cannot guarantee that two different builds come to the same result. Too many components (actually Maven and the local host) are outside of the source code repository and still influence the output of the build. But this post is about the JGit based timestamps 😉

A simple configuration using the Git based approach looks like this in the parent pom file:
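Roughly like this, assuming a ${tycho-version} property as in the recipe on the Tycho wiki:

<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-packaging-plugin</artifactId>
  <version>${tycho-version}</version>
  <dependencies>
    <dependency>
      <groupId>org.eclipse.tycho.extras</groupId>
      <artifactId>tycho-buildtimestamp-jgit</artifactId>
      <version>${tycho-version}</version>
    </dependency>
  </dependencies>
  <configuration>
    <timestampProvider>jgit</timestampProvider>
    <jgit.ignore>pom.xml</jgit.ignore>
  </configuration>
</plugin>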


As you can see, there is a configuration property jgit.ignore which allows you to exclude a set of files from the search for the last relevant commit. So git commits which only touch ignored files are also ignored in the search for the last modification timestamp. Since the pom.xml will probably just get changed to point to a different parent POM, this seems like a good idea.

The problem

Now what does happen when there are uncommitted changes in the working tree? Then it would not be possible for the build to determine the last relevant commit, since the change is not committed! Maven Tycho does provide a way to handle this (aka “Dirty working tree behaviour”) and will allow you to ignore this. Which might not be a good idea after all. The default behavior is to simply raise an error and fail the build.
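If you really do want to ignore it, that is, as far as I know, a single property in the <configuration> section shown above:

<jgit.dirtyWorkingTree>ignore</jgit.dirtyWorkingTree>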

For me it became a real annoyance when it complained about the “target” directory itself. The truth is, this output directory should be added to the “.gitignore” file anyway, which would then also be respected by the git based build timestamp provider. But then again, it should not fail the build just because of that.


But the solution to that was rather trivial. The jgit.ignore property follows the git ignore syntax and also allows you to specify directories:
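<jgit.ignore>
  pom.xml
  target/
</jgit.ignore>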


There are two things which have to be kept in mind: each entry goes on a new line, and the root of the evaluation seems not to be the root of the project, so using “/target/” (compared to “target/“) does not work.

Safer surfing for kids – My wishlist

At some point my son will start surfing … the web. Now, as with all other things, I’d like to protect him, but I also know there is no such thing as 100% security, neither in real life nor on the internet. The main task for me, as a parent, is to prepare him. I’d like a bit of technical help though. Here is my list of wishes.

I know that this is a troublesome topic, and several approaches have already been tried. Some people in Germany came up with ideas like limiting child-unfriendly content to times after 10pm (like for TV channels), or prohibiting entrance to those sites (like shops in the “real world”). And while I do understand the idea of actually restricting access, I personally think that it is more the parents’ task to explain the situation, rather than to blame others when something goes wrong. But on the other side, I also do think that children should be prevented from stumbling into something by accident which is not suited for them.

The main thing that most people calling for limits on content get wrong is that they compare the internet with the real world. The fact is that between a website and the browser on the other side, nothing is certain. If you want to buy alcohol and you look “too young”, you will get asked for your ID/passport. Easy. But which browser session looks “too young”, and which “ID” would it show? The internet just does not work that way.

So what can be done?

Simply filtering and limiting anything and everything is a crappy solution. And looking at Great Britain, for example, easily tells you what can go wrong: first, everybody who feared that this could be used for censoring content was put up as an idiot, and then websites got blocked which had nothing to do with “unsuitable” content.

So what do I want? That is easy: I want to put up limits. Me, as a parent. So if I decide “no naked breasts”, I make this decision, and I don’t want Facebook to make that decision for me, or my son. So people providing content need to categorize it, flag it with tags which might indicate offensive content. Impossible? This is done every day for video games, books and movies. And even there you can see the “full control” idea, which some people have, failing. Sure, some movies are not sold to children because of age restrictions, but parents can easily circumvent that. So again, parents are making these decisions, so let them!

And these categorizations have to be on a “best effort” basis, and non-binding, if you do want to get content providers into this. If you make the providers of web content liable for failing to flag something, everybody will step back and fight the system, because everybody working with web services and content knows that the web really is a web. Content is integrated from different sources, and somebody might just write the word “fuck” in a comment for a USB hub he bought on Amazon; still, Amazon should not be made liable for not flagging its store, or this page, as “unsuitable for children” or “hate speech”. This will simply not work.

Second, allow me to set up my browser, or the browser of my son, to reject content with specific flags. This can also be done in “incognito mode”, no harm done. When content comes back, the categories (provided vs. blocked) are evaluated and the page is shown or not.

How to get there

So this would basically be enough. But now, who has an interest in actually limiting their audience? Because this is what a web site owner actually does: first you have the effort of categorizing your website, and then your audience gets smaller because of just that.

So making this as easy as possible from a technical perspective is a must. As is the fact that content providers must not be made liable for glitches in their categorizations. It is about the core content of the web site, not about details. On a social media website naked people might turn up, but as long as it is not a website about naked people, that is a glitch in the system.

Give web site owners who categorize their content a benefit over others. Now who can do this? Easy, search engines! If you want more traffic than others, categorize your content!

Which adds another benefit: search engine results can be marked as suitable for your browser setup or not. If the browser sends its category permissions to the web site it is visiting, the web site _can_ evaluate this (still, the browser has to enforce it), but also the search engine can visually mark content as “not suitable”. Again, filtering this out is not a good idea: you would never find the content. If something goes wrong, you will never have a talk with your child about why this was flagged wrong, or why this may be an exception to the rules you put up! Your child will learn nothing other than: this is a bad and broken system!

How to make this easy?

Every page you request will receive your setup in an HTTP header: Rejected-Content-Categories: violence, alcohol. That should be easy, as long as there is a list of well-known categories.

Every response will deliver a similar HTTP response header: Content-Categories: violence, nudity

Again, very easy. The omission of this header just shows that the categorization process is not being performed, which brings us back to where we are now. So it is not worse.
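To make this concrete, a request/response pair could look like this (a sketch; the category names are of course made up):

GET /some/page HTTP/1.1
Host: www.example.com
Rejected-Content-Categories: violence, alcohol

HTTP/1.1 200 OK
Content-Type: text/html
Content-Categories: violence, nudity

The browser sees the overlap on “violence” and refuses to render the page.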

For the main HTTP request this is evaluated; for sub-requests (like JavaScript, CSS) it is not, so the header does not have to be sent there. But doesn’t that open a loophole where you could load content using Ajax requests and inject it? Again, this is a “best-effort” idea, which tries to prevent your child from “stumbling” into this content. If your child wants to see naked people, it will probably find a way to do so. Talk with your child first! This helps more than any technical barrier!

So in order not to put this header into every reply, it could be sufficient to add this content to a “categories.txt” file in the root of your domain, or into a DNS record of your domain. This would allow you to manually categorize your content if your web site software does not support this. It would be evaluated in the order: HTTP header, then categories.txt, then DNS record.

Impossible? SPF does something quite similar today for detecting spam, and you have “robots.txt” for your content.
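A sketch of the two fallbacks (the exact syntax would of course be part of the standardization):

# https://www.example.com/categories.txt
Content-Categories: violence, nudity

; DNS TXT record carrying the same information
example.com. IN TXT "content-categories=violence,nudity"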

So what would we have?

  • The browser tells the other side (the website) what it rejects by sending the HTTP header Rejected-Content-Categories. The web site may use this to mark content, reject the request or issue a warning.
  • Web sites which want to take part in this either reply with an HTTP header Content-Categories, place it into their “categories.txt”, or add it as a TXT record to their DNS zone.
  • The browser prevents displaying content which is considered “rejected” on its own.
  • This would be a set of small changes, drastically improving the situation.

What needs to be done?

  • Create and standardize a list of categories, not too big and easy to understand
  • Browsers (including mobile browsers!) would need to be changed to adopt this behavior; this could also be done as a plug-in
  • Web site owners would need to categorize their content
  • Search engines would need to actually evaluate the content and make people aware of this

The sad part

Of course nobody will do this “on their own”. For a critical mass, you would need one mainstream browser, a big search engine, and a few sites which step in.

So if you feel like doing this, let me know, I will help!


One thing I did not mention: on purpose, this does not use age restrictions. Again, the internet does not work that way. While in Bavaria the topic of “alcohol” might be considered trivial, in the US it might be a topic which is allowed only for people 21+. So you simply cannot group restrictions by age. It might be possible to create a default set of rejection categories based on country and age, but that is something which could be offered as a default setting in the browser.

A bit of Java 8 Optional

For me Java 8 wasn’t a big deal … until I had to go back to Java 7. Suddenly I started missing things I had been using without even realizing it. Here comes Optional<T>:

Assume we have some sort of class (Provider) which does something and has a “getName” method. Now we also have a method in a class managing providers which returns a provider by ID, so we pass in a string ID and get back a provider:

static class Provider {
    String getName () {
        return "bar";
    }
}

static Provider getProvider ( final String id ) {
    if ( "foo".equals ( id ) ) {
        return new Provider ();
    }
    return null;
}

In this simple example the manager only knows the provider “foo”, which will return “bar” as its name. All requests for other providers will return null. A real life scenario might have a Map, which also returns null in case of a missing element.

Now a pretty common code snippet before Java 8 would look like this:

final Provider provider = getProvider ( "bar" );
final String value;
if ( provider != null ) {
    value = provider.getName ();
} else {
    value = null;
}
System.out.println ( "Bar (pre8): " + value );

Pretty noisy. So the first step is to use the “Optional” type and to guarantee that the getProvider method never returns null, so we don’t have to check for it:

static Optional<Provider> getOptionalProvider ( final String id ) {
    return Optional.ofNullable ( getProvider ( id ) );
}

In this case a new method was added, which simply calls the old one. The next thing is to use Optional.map(…) and Optional.orElse(…) to transform the value and return a default if we don’t have a value.

String value1 = getOptionalProvider ( "foo" )
        .map ( Provider::getName )
        .orElse ( null );
System.out.println ( "Foo: " + value1 );

Pretty simple actually. But still readable and understandable (although some people might disagree on that one πŸ˜‰ ).

So what does happen? First, the call to getOptionalProvider will now never return null. If the value itself would be null, it returns an empty Optional, but still a class instance; actually always the same one, since there is only one instance of an empty Optional. Next, the map method will call the provided expression (a longer version would be: value -> value.getName()), but it will only do this if the Optional is not empty; otherwise it will return an empty Optional again. So after calling map we either have an Optional<String> with the value of getName(), or again an empty Optional. Calling orElse on this new Optional will either return the value of the Optional, or the default value provided, null in this case.

Of course one could argue that internally the same logic happens as with Java 7 and before. But I think that this way you actually can write a one-liner which is understandable, but still does not clutter your actual class with too many lines of code just for checking against null.


IAdapterFactory with generics

Now, I have been working with the Eclipse platform for quite a while. If you do so, you might already have run into the “adaptable” mechanism the Eclipse platform provides (article on EclipseZone).

The basics

The basic idea is to “cast” one object into the class of another, allowing you to step into the process and maybe return a new object instance if casting is not possible, so adapting to the requested interface. This is nothing new, but comes in handy every now and then. Especially since the Eclipse platform provides an “external” adapter mechanism to control this adaption process. Simply assume you have a class “MyModelDocument” which is used throughout your Eclipse application. Now somebody selects a UI element backed by an instance of your class, and you want the Eclipse UI to show the properties of your instance in the Eclipse properties view. This is done by an instance of IPropertySource. At first this would mean your class needs to implement IPropertySource, and do the same for every other aspect you want to add to your model. Besides implementing the interface, you would also aggregate a lot of dependencies in the bundle of your model.

But there is a better way, thanks to the adapter framework. First of all, your class “MyModelDocument” can use the adapter framework and simply create an adapter class which has to implement IPropertySource, but is backed by the original instance of your “MyModelDocument” class. Second, you can create a new bundle/plugin which contributes an extension to the extension point named “org.eclipse.core.runtime.adapters” and implement a class based on IAdapterFactory.
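The registration in the plugin.xml of that new bundle could look like this (the package and class names are of course made up for the example):

<extension point="org.eclipse.core.runtime.adapters">
   <factory
         adaptableType="com.example.model.MyModelDocument"
         class="com.example.adapter.MyAdapterFactory">
      <adapter type="org.eclipse.ui.views.properties.IPropertySource"/>
   </factory>
</extension>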


Now a typical implementation of this class in Java 5+ looked like this:

public class MyAdapterFactory implements IAdapterFactory {
    @SuppressWarnings ( "unchecked" )
    public Object getAdapter (
                final Object adaptableObject,
                final Class adapterType ) {
        if ( !(adaptableObject instanceof MyModelDocument) ) {
            return null;
        }

        if ( IPropertySource.class.equals ( adapterType ) ) {
            return new MyModelDocumentPropertySourceAdapter ( adaptableObject );
        }

        return null;
    }

    @SuppressWarnings ( "unchecked" )
    public Class[] getAdapterList () {
        return new Class[] { IPropertySource.class };
    }
}
Of course the @SuppressWarnings for “unchecked” could be left out, but that would trigger a bunch of warnings. The cause simply was that IAdapterFactory did not make use of Java 5 generics.

In a recent update of the Eclipse platform this interface has been extended to allow the use of generics: the method Object getAdapter(…) is now <T> T getAdapter(…). While this does not really benefit implementations of the class itself (IMHO), it cleans up the warnings if you do it right 😉

Keep in mind that the type parameter <T> is completely variable for the factory itself, since the factory allows adapting to any kind of type some other class requests. So you actually will never be able to make a specific substitution for <T>. The return type of getAdapter() changes to T, which requires you to actually cast to T. Which can be done in two ways. Either by casting using:

return (T)new MyModelDocumentPropertySourceAdapter ( adaptableObject );

Which will trigger the next warning right away, since there is no way to actually do that cast: type erasure will kill the type information at runtime! The way to work around this in Java has always been to pass the type along in such situations, like in JPA. The IAdapterFactory already has the type information as a parameter, so you can do a programmatic cast instead:

return adapterType.cast ( new MyModelDocumentPropertySourceAdapter ( adaptableObject ) );

So the full code would look like:

public class MyAdapterFactory implements IAdapterFactory {
    public <T> T getAdapter (
                final Object adaptableObject,
                final Class<T> adapterType ) {
        if ( !(adaptableObject instanceof MyModelDocument) ) {
            return null;
        }

        if ( IPropertySource.class.equals ( adapterType ) ) {
            return adapterType.cast (
                    new MyModelDocumentPropertySourceAdapter ( adaptableObject )
            );
        }

        return null;
    }

    public Class<?>[] getAdapterList () {
        return new Class<?>[] { IPropertySource.class };
    }
}

Programmatically adding a host key with JSch

Every now and then you stumble over an issue which seems easy in the beginning, but turns out to be something out of the ordinary.

For example establishing an SSH connection to a Linux server using Java. Of course there is the JSch library, which is also in Eclipse Orbit. So this sounds like an ideal solution when developing with OSGi.

However, pretty soon I ran into the issue that I did not want to write all host keys into my “known_hosts” file, but would rather provide the host key to each new connection which is being created. And while JSch can do a lot of things, all the sample projects somehow assume that you are writing a Swing application, with a full user interface, re-using all existing SSH options and configuration files.

But I wanted to create a server side solution, embedded in OSGi, which allows storing the username, password and host key in a server side data store, which can then be used to establish the connection.

So initially I got a “com.jcraft.jsch.JSchException: UnknownHostKey” exception. Not very helpful, since it only contains a string with the key’s fingerprint instead of the full key. Asking Google for help brings up a few solutions, like this one on Stack Overflow.

However, simply disabling the host key check was not an option. And it is not a good idea in most cases anyway.

Gladly, JSch allows you to programmatically add host keys. Although the approach is rather undocumented. At least it seems that way.

Creating a new JSch instance allows you to specify the location of the host keys, but also allows you to add them manually:

String keyString = "....";

JSch jsch = new JSch ();

// parse the key
byte [] key = Base64.getDecoder ().decode ( keyString ); // Java 8 Base64 - or any other

// create the host key entry ("info" is the server side data store mentioned above)
HostKey hostKey = new HostKey ( info.getHostname (), key );

// add the host key (no UserInfo needed here)
jsch.getHostKeyRepository ().add ( hostKey, null );

Basically this does the trick. The only question is: what exactly is the “keyString”? It is not the fingerprint from the exception, and not the full line from your “known_hosts” file either, but just the last segment of that line.

So for example if your “known_hosts” entry is:

|1|DvS0JwyQni+Jqoht2n8BSYQjze4=|zHORICsezHdR1nIYhqsOxrgnUe4= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAwht8wWW+cqmGJa5KrgfgydvlgHxSmlV+8oSUINSm8ix+wG87jQHz56MeaFf0F3IvxiivfvIUxBGlb05CZC1rCTfinvS7H1ktDIwVUK3gv+SGNYtGGwWbtg+oMXAevpV5pMTvDS7Ue6OUnSXGDbAxcqXBA+ApKCG5oizhyrtzOrU=

Then the “keyString” is:
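AAAAB3NzaC1yc2EAAAABIwAAAIEAwht8wWW+cqmGJa5KrgfgydvlgHxSmlV+8oSUINSm8ix+wG87jQHz56MeaFf0F3IvxiivfvIUxBGlb05CZC1rCTfinvS7H1ktDIwVUK3gv+SGNYtGGwWbtg+oMXAevpV5pMTvDS7Ue6OUnSXGDbAxcqXBA+ApKCG5oizhyrtzOrU=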


Problem solved πŸ˜‰
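For completeness: with the host key registered, opening the connection is the usual JSch procedure. A minimal sketch, with user, host and password as placeholders for the values from the data store mentioned above:

Session session = jsch.getSession ( "user", "host.example.com", 22 );
session.setPassword ( "secret" );
session.connect ();
// ... open channels, execute commands ...
session.disconnect ();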