Maven Tycho/JGit based build timestamps and the “target” directory

When you build OSGi bundles using Maven Tycho, you have probably run into the issue of creating a meaningful version qualifier (remember, an OSGi version is always major.minor.micro.qualifier, so no dash and definitely no -SNAPSHOT).

There are a few approaches, ranging from fully manual assignment of the build qualifier over simple timestamps to timestamps based on the last Git change.

The background

The latter is described in the “Reproducible Version Qualifiers” wiki page of Tycho as a recipe for creating the same qualifier from the same source code revision.

The idea is pretty simple: instead of the current timestamp, the last relevant change in the Git repository for the directory of the bundle is located and used to generate the timestamp-based qualifier.

As a side note: personally I came to the conclusion that this sounds great in the beginning, but turns out to be troublesome later. First of all, the Build Qualifier plugin conflicts with the Source Ref Plugin, which generates a different manifest. Both plugins find different last commits, and therefore different MANIFEST.MF files get generated. So two builds produce two bundles with the same qualifier but actually (due to the MANIFEST.MF) different content, with two different checksums, which causes issues later on and has to be cleaned up by some baseline repository matching. In addition, you simply cannot guarantee that two different builds come to the same result. Too many components (actually Maven and the local host) are outside of the source code repository and still influence the output of the build. But this post is about the JGit based timestamps 😉

A simple configuration using the Git based approach looks like this in the parent pom file:

<plugin>
   <groupId>org.eclipse.tycho</groupId>
   <artifactId>tycho-packaging-plugin</artifactId>
   <version>${tycho.version}</version>
   <dependencies>
     <dependency>
       <groupId>org.eclipse.tycho.extras</groupId>
       <artifactId>tycho-buildtimestamp-jgit</artifactId>
       <version>${tycho-extras.version}</version>
     </dependency>
   </dependencies>
   <configuration>
     <timestampProvider>jgit</timestampProvider>
     <jgit.ignore>
       pom.xml
     </jgit.ignore>
   </configuration>
</plugin>

As you can see, there is a configuration property jgit.ignore, which allows you to exclude a set of files from the search for the last relevant commit. Git changes which only touch ignored files are thus also ignored in the search for the last modification timestamp. Since the pom.xml will probably just get changed to point to a different parent POM, this seems like a good idea.

The problem

Now what happens when there are uncommitted changes in the working tree? Then it is not possible for the build to determine the last relevant commit, since the change is not committed! Maven Tycho does provide a way to handle this (aka “Dirty working tree behaviour”) and will allow you to ignore this, which might not be a good idea after all. The default behavior is to simply raise an error and fail the build.
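If you do want to relax this, the wiki page mentioned above describes a jgit.dirtyWorkingTree property for exactly that. A minimal sketch, extending the configuration from above (to my knowledge the accepted values are “error”, “warning” and “ignore”):

<configuration>
  <timestampProvider>jgit</timestampProvider>
  <!-- warn instead of failing the build on uncommitted changes -->
  <jgit.dirtyWorkingTree>warning</jgit.dirtyWorkingTree>
</configuration>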

For me it became a real annoyance when it complained about the “target” directory itself. The truth is, this output directory should be added to the “.gitignore” file anyway, which would then also be respected by the Git based build timestamp provider. But then again, the build should not fail just because of that.

Solution

But the solution to that was rather trivial. The jgit.ignore property follows the git ignore syntax and also allows you to specify directories:

<jgit.ignore>
  pom.xml
  target/
</jgit.ignore>

There are two things which have to be kept in mind: each entry goes on a new line, and the root of the evaluation seems not to be the root of the project, so using “/target/” (compared to “target/”) does not work.

Safer surfing for kids – My wishlist

At some point my son will start surfing … the web. Now as with all other things, I’d like to protect him, but I also know there is no such thing as 100% security, neither in real life nor on the internet. The main task for me, as a parent, is to prepare him. I’d like a bit of technical help though. Here is my list of wishes.

I know that this is a troublesome topic, and several approaches have already been tried. Some people in Germany came up with ideas like limiting child-unfriendly content to times after 10pm (as for TV channels), or prohibiting entrance to those sites (like shops in the “real world”). And while I do understand the idea of actually restricting access, I personally think that it is more the parents’ task to explain the situation, rather than to blame others when something goes wrong. But on the other side, I also do think that children should be prevented from stumbling into something by accident which is not suited for them.

The main thing most people calling for limits on content get wrong is that they compare the internet with the real world. The fact is that between a website and the browser on the other side, nothing is certain. If you want to buy alcohol and you look “too young”, you will get asked for your id/passport. Easy. But which browser session looks “too young”, which “id” is actually shown? The internet just does not work that way.

So what can be done?

Simply filtering and limiting anything and everything is a crappy solution. And looking at Great Britain, for example, easily tells you what can go wrong. First everybody who feared that this could be used for censoring content was put up as an idiot, and then websites got blocked which had nothing to do with “unsuitable” content.

So what do I want? That is easy: I want to put up limits. Me, as a parent. So if I decide “no naked breasts”, I make this decision, and I don’t want Facebook to make that decision for me, or my son. So people providing content need to categorize it, flag it with tags which might indicate offensive content. Impossible? This is done every day for video games, books and movies. And even there you can see the “full control” idea failing, which some people have. Sure, some movies are not sold to children because of age restrictions, but parents can easily circumvent that. So again, parents are making these decisions, so let them!

And these categorizations have to be on a “best effort” basis, and non-binding, if you want to get content providers on board. If you make the providers of content in the web liable for not flagging content, everybody will step back and fight the system, because everybody working with web services and content knows that the web really is a web. Content is integrated from different sources, and somebody might just write the word “fuck” in a comment for a USB hub he bought on Amazon. Still, Amazon should not be made liable for not flagging its store, or this page, as “unsuitable for children” or “hate speech”. That will simply not work.

Second, allow me to set up my browser, or the browser of my son, to reject content with specific flags. This can also be done in “incognito mode”, no harm done. When content comes back, the categories (provided vs. blocked) are evaluated and the page is shown or not.

How to get there

So this would basically be enough. But now, who has an interest in actually limiting their audience? Because this is what a web site owner would actually be doing. First you have the effort of categorizing your website, and then your audience gets smaller because of just that.

So making this as easy as possible from a technical perspective is a must. As is the fact that content providers must not be made liable for glitches in their categorizations. It is about the core content of the web site, not details. On a social media website naked people might turn up, but as long as this is not a website about naked people, it is a glitch in the system.

Give web site owners who categorize their content a benefit over others. Now who can do this? Easy, search engines! If you want more traffic than others, categorize your content!

Which adds another benefit: search engine results can be marked as suitable for your browser setup or not. If the browser sends its category permissions to the web site it is visiting, the web site _can_ evaluate this (still the browser has to enforce it), but the search engine can also visually mark content as “not suitable”. Again, filtering this out is not a good idea. You would never find this content. If something goes wrong, you will never have a talk with your child about why this was flagged wrongly, or why this may be an exception to the rules you put up! Your child will learn nothing other than: this is a bad and broken system!

How to make this easy?

Every page you request will receive your setup in an HTTP header: Rejected-Content-Categories: violence, alcohol. That should be easy, as long as there is a list of well-known categories.

Every response will deliver a similar HTTP response header: Content-Categories: violence, nudity

Again, very easy. The omission of this header just shows that the categorization process is not being performed. Which brings us back to where we are now. So it is not worse.
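Just to make this concrete, here is a minimal sketch of the evaluation a browser (or a plug-in) could perform; the header names follow the proposal above, everything else is made up for illustration:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class CategoryCheck {

    // categories the parent chose to reject, taken from the browser setup
    private final Set<String> rejected = new HashSet<> ( Arrays.asList ( "violence", "nudity" ) );

    // evaluate the proposed "Content-Categories" response header;
    // a missing header means: no categorization, show the page as today
    public boolean isAllowed ( final String contentCategoriesHeader ) {
        if ( contentCategoriesHeader == null ) {
            return true;
        }
        for ( final String category : contentCategoriesHeader.split ( "," ) ) {
            if ( rejected.contains ( category.trim ().toLowerCase () ) ) {
                return false; // a rejected category was found
            }
        }
        return true;
    }
}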

For the main HTTP request this is evaluated; for sub-requests (like JavaScript, CSS) it is not. So the header does not have to be sent there. But doesn’t that open a loophole where you could load content using Ajax requests and inject it? … again, this is a “best effort” idea, which tries to prevent your child from “stumbling” into this content. If your child wants to see naked people, it will probably find a way to do so. Talk with your child first! This helps more than any technical barrier!

So, in order not to put this header into every reply, it could be sufficient to add this content to a “categories.txt” file in the root of your domain, or into a DNS record of your domain. This would allow you to manually categorize your content if your web site software does not support this. The evaluation order would be: HTTP header, then categories.txt, then DNS record.

Impossible? SPF already does something quite similar for detecting spam, and there is “robots.txt” for your content.

So what would we have?

  • The browser tells the other side (the website) what it rejects by sending the HTTP header Rejected-Content-Categories. The web site may use this to mark content, reject the request or issue a warning.
  • Web sites which want to take part in this either reply with an HTTP header Content-Categories, place it into their “categories.txt”, or add it as a TXT record to their DNS zone.
  • The browser itself prevents displaying content which is considered “rejected”.
  • This would be a set of small changes, drastically improving the situation.

What needs to be done?

  • Create and standardize a list of categories, not too big and easy to understand
  • Browsers (including mobile browsers!) would need to be changed to adopt this behavior; this could also be done as a plug-in
  • Web site owners would need to categorize their content
  • Search engines would need to actually evaluate the content and make people aware of this

The sad part

Of course nobody will do this “on their own”. For a critical mass, you would need one mainstream browser, a big search engine, and a few sites which step in.

So if you feel like doing this, let me know, I will help!

Update

One thing I did not mention is that the omission of age restrictions was on purpose. Again, the internet does not work that way. While in Bavaria the topic of “alcohol” might be considered trivial, in the US it might be a topic which is allowed only for people 21+. So you simply cannot group restrictions by age. It might be possible to create a default set of rejection categories based on country and age, but this is something which could be used as a default setting in the browser.

A bit of Java 8 Optional

For me Java 8 wasn’t a big deal … until I had to go back to Java 7. Suddenly I started missing things I had begun using without even realizing it. Here comes Optional<T>:

Assume we have some sort of class (Provider) which does something and has a “getName” method. Now we also have a method in a class managing providers which returns the provider by id, so we pass in a string ID and get back a provider:

static class Provider {
    String getName () {
        return "bar";
    }
}

static Provider getProvider ( final String id ) {
    if ( "foo".equals ( id ) ) {
        return new Provider ();
    }
    return null;
}

In this simple example the manager only knows the provider “foo”, which will return “bar” as its name. All requests for other providers will return null. A real life scenario might have a Map, which also returns null in case of a missing element.

Now a pretty common code snippet before Java 8 would look like this:

final Provider provider = getProvider ( "bar" );
final String value;
if ( provider != null ) {
    value = provider.getName ();
} else {
    value = null;
}
System.out.println ( "Bar (pre8): " + value );

Pretty noisy. So the first step is to use the “Optional” type and to guarantee that the getProvider method never returns null, so we don’t have to check for it:

static Optional<Provider> getOptionalProvider ( final String id ) {
    return Optional.ofNullable ( getProvider ( id ) );
}

In this case a new method was added, which simply calls the old one. The next step is to use Optional.map(…) and Optional.orElse(…) to transform the value and return a default if we don’t have a value.

String value1 = getOptionalProvider ( "foo" )
        .map ( Provider::getName )
        .orElse ( null );
System.out.println ( "Foo: " + value1 );

Pretty simple actually. But still readable and understandable (although some people might disagree on that one 😉 ).

So what happens? First, the call to getOptionalProvider will never return null. If the value itself is null, it will return an empty Optional, but still a class instance (actually always the same one, since there is only one shared instance of the empty Optional). Next, the map method will call the provided expression (a longer version would be: value -> value.getName()), but it will only do this if the Optional is not empty; otherwise it will return an empty Optional again. So after calling map we either have an Optional<String> with the value of getName(), or again an empty Optional. Calling orElse on this new Optional will either return the value of the Optional or the provided default, null in this case.
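The same pattern also works for the Map scenario mentioned earlier; a minimal sketch, assuming a plain java.util.HashMap and the Provider class from above:

final Map<String, Provider> providers = new HashMap<> ();
providers.put ( "foo", new Provider () );

final String name = Optional.ofNullable ( providers.get ( "bar" ) )
        .map ( Provider::getName )
        .orElse ( null ); // "bar" is missing, so we get the default: null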

Of course one could argue that internally the same logic happens as with Java 7 and before. But I think that this way you actually can write a one-liner which is understandable, but still does not clutter your actual code with too many lines just for checking for null.

IAdapterFactory with generics

Now I have been working with the Eclipse platform for quite a while. If you do so as well, you might already have run into the “adaptable” mechanism the Eclipse platform provides (article on EclipseZone).

The basics

The basic idea is to “cast” one object into the class of another, allowing the platform to step into the process and maybe return a new object instance if casting is not possible, so adapting to the requested interface. This is nothing new, but comes in handy every now and then. Especially since the Eclipse platform allows an “external” adapter mechanism to control this adaption process. Simply assume you have a class “MyModelDocument”, which is used throughout your Eclipse application. Now somebody selects a UI element backed by an instance of your class, and you want the Eclipse UI to show the properties of your instance in the Eclipse properties view. This is done by an instance of IPropertySource. At first this would mean that your class needs to implement IPropertySource, and the same for every other aspect you want to add to your model. In addition to implementing the interface, you would also aggregate a lot of dependencies in the bundle of your model.

But there is a better way, thanks to the adapter framework. First of all, your class “MyModelDocument” can use the adapter framework and simply get an adapter class, which has to implement IPropertySource but is backed by the original instance of your “MyModelDocument” class. Second, you can create a new bundle/plugin which uses the extension point named “org.eclipse.core.runtime.adapters” and implement a class based on IAdapterFactory.
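Such a registration in the plugin.xml could look roughly like this (package and class names are made up for this example):

<extension point="org.eclipse.core.runtime.adapters">
   <factory
         adaptableType="com.example.MyModelDocument"
         class="com.example.MyAdapterFactory">
      <adapter type="org.eclipse.ui.views.properties.IPropertySource"/>
   </factory>
</extension>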

Generics

Now a typical implementation of this class in Java 5+ looked like this:

public class MyAdapterFactory implements IAdapterFactory {
    @SuppressWarnings ( "unchecked" )
    @Override
    public Object getAdapter (
                final Object adaptableObject,
                final Class adapterType ) {
        
        if ( !(adaptableObject instanceof MyModelDocument) ) {
            return null;
        }

        if ( IPropertySource.class.equals ( adapterType ) ) {
            return new MyModelDocumentPropertySourceAdapter ( adaptableObject );
        }

        return null;
    }

    @SuppressWarnings ( "unchecked" )
    @Override
    public Class[] getAdapterList () {
        return new Class[] { IPropertySource.class };
    }
}

Of course the @SuppressWarnings for “unchecked” could be left out, but then the code would trigger a bunch of warnings. The cause simply was that IAdapterFactory did not make use of Java 5 generics.

In a recent update of the Eclipse platform this interface has been extended to allow the use of generics: the method Object getAdapter (…) is now <T> T getAdapter (…). While this does not really benefit implementations of the class itself (IMHO), it cleans up the warnings if you do it right 😉

Keep in mind that the type parameter <T> is completely variable for the factory itself, since the factory allows adapting to any kind of type some other class requests. So you actually will never be able to make a specific substitution for <T>. The return type of getAdapter() changes to T, which requires you to actually cast to T. This can be done in two ways. Either by a plain cast:

return (T)new MyModelDocumentPropertySourceAdapter ( adaptableObject );

This will trigger the next warning right away, since there is no way to actually perform the cast: type erasure will remove the type information at runtime! The way to work around this in Java has always been to pass the type explicitly in such situations. Like in JPA, the IAdapterFactory already has the type information as a parameter, so you can do a programmatic cast instead:

return adapterType.cast ( new MyModelDocumentPropertySourceAdapter ( adaptableObject ) );

So the full code would look like:

public class MyAdapterFactory implements IAdapterFactory {
    @Override
    public <T> T getAdapter (
                final Object adaptableObject,
                final Class<T> adapterType ) {
        
        if ( !(adaptableObject instanceof MyModelDocument) ) {
            return null;
        }

        if ( IPropertySource.class.equals ( adapterType ) ) {
            return adapterType.cast (
                    new MyModelDocumentPropertySourceAdapter ( adaptableObject )
               );
        }

        return null;
    }

    @Override
    public Class<?>[] getAdapterList () {
        return new Class<?>[] { IPropertySource.class };
    }
}

Programmatically adding a host key with JSch

Every now and then you stumble over an issue which seems easy in the beginning, but turns out to be something out of the ordinary.

For example establishing an SSH connection to a Linux server using Java. Of course there is the JSch library, which is also in Eclipse Orbit. So this sounds like an ideal solution when developing with OSGi.

However, pretty soon I ran into the case that I did not want to write all host keys into my “known_hosts” file, but would like to provide the host key to each new connection being created. And while JSch can do a lot of things, all sample projects somehow assume you are writing a Swing application with a full user interface, re-using all existing SSH options and configuration files.

But I wanted to create a server side solution, embedded in OSGi, which allows storing the username, password and host key in a server side data store which can then be used to establish the connection.

So initially I got a “com.jcraft.jsch.JSchException: UnknownHostKey” exception. Not very helpful, since it only contains a string with the key’s fingerprint instead of the full key. Asking Google for help brings up a few solutions like this one on Stackoverflow.

However, simply disabling the host key check was not an option. And it is not a good idea in most cases.

Gladly JSch allows adding host keys programmatically, although the approach is rather undocumented. At least it seems that way.

Creating a new JSch instance allows you to specify the location of the host keys, but also allows you to add them manually:

String keyString = "....";

JSch jsch = new JSch ();

// parse the key ("keyString" is explained below)
byte [] key = Base64.getDecoder ().decode ( keyString ); // Java 8 Base64 - or any other

// create the host key entry ("info" provides the host name in this example)
HostKey hostKey = new HostKey ( info.getHostname (), key );

// add the host key (no UserInfo needed, so null is passed)
jsch.getHostKeyRepository ().add ( hostKey, null );

Basically this does the trick. The only question is: what exactly is the “keyString”? It is not the fingerprint from the exception, and it is not the full line from your known_hosts file, just the last segment.

So for example if your “known_hosts” entry is:

|1|DvS0JwyQni+Jqoht2n8BSYQjze4=|zHORICsezHdR1nIYhqsOxrgnUe4= ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAwht8wWW+cqmGJa5KrgfgydvlgHxSmlV+8oSUINSm8ix+wG87jQHz56MeaFf0F3IvxiivfvIUxBGlb05CZC1rCTfinvS7H1ktDIwVUK3gv+SGNYtGGwWbtg+oMXAevpV5pMTvDS7Ue6OUnSXGDbAxcqXBA+ApKCG5oizhyrtzOrU=

Then the “keyString” is:

AAAAB3NzaC1yc2EAAAABIwAAAIEAwht8wWW+cqmGJa5KrgfgydvlgHxSmlV+8oSUINSm8ix+wG87jQHz56MeaFf0F3IvxiivfvIUxBGlb05CZC1rCTfinvS7H1ktDIwVUK3gv+SGNYtGGwWbtg+oMXAevpV5pMTvDS7Ue6OUnSXGDbAxcqXBA+ApKCG5oizhyrtzOrU=
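For completeness, a minimal sketch of how the connection could then be established with the prepared JSch instance (user, host and password are placeholders):

Session session = jsch.getSession ( "user", "host.example.com", 22 );
session.setPassword ( "secret" );
session.connect ( 30000 ); // succeeds now, since the host key is known
// ... use the session, e.g. open an exec or sftp channel ...
session.disconnect ();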

Problem solved 😉

The curated Play Store / App Store

Dear Google,

every now and then I come up with the idea of buying another game in the Play Store (yes, a mobile game, for Android). So I browse and browse and finally give up again. The main entry views give me a few promoted items, which mostly are uninteresting as games, the top lists show games which I already checked out last time, or already bought, and the recommendations show hardly any interesting games. The same is true for non-game apps (I wouldn’t always call them productive apps 😉 ).

Even funnier are the recommendations for normal apps. I do understand that showing more games of the same type is a good idea when I gave 5 stars to the game in question, but when I tell you that I found (e.g.) a file manager that I really like, the last thing I want is another list of similar file managers. I just told you I found a good one! So in the case of a normal app, giving 1 star should trigger the “show others” list. But this is only a side quest.

If you have a music subscription service like Spotify, Google Music All Access or whatever else you like, then you pretty soon find out that there is so much music out there, you sometimes wish for somebody to tell you: “Listen to this one”. Google Music does that by suggesting songs and “radio stations”. But really, those radio stations repeat themselves after a few hours of listening. So it seems there is not much AI behind them, probably just a shuffled playlist.

The bright side is that both Spotify and Google Music (and probably others as well) have something like “shared playlists”. So once you have found somebody who shares your taste, you will probably find more games and apps you like. However, just considering “all apps of this person” a good match is not a good idea. Just consider the fact that some people install all kinds of rubbish. On the other hand, some people really do have a talent for curating a few playlists.

Now here is what would be a great idea. Curated lists are already there on YouTube (aka channels, or playlists), in Google Music (shared playlists), and in Spotify (again, shared playlists). So instead of letting people search for “best games android 2015”, give people a chance to:

  • Create playlists for apps/games
  • Let them embed the playlists in their homepages
  • Maybe even give them a share on purchased items

It’s easy! For games they could even be called “play lists” 😉

PS: I guess the same is still valid for the Apple App Store

OSGi + JSP + JSTL

What is so easy with a standard JEE setup becomes quite painful using OSGi. Although there are very interesting projects and approaches like OSGi enRoute, Pax Web or Equinox JSP (and probably a few more), taking a step beyond “Hello World” quickly becomes painful.

OSGi has had support for registering servlets for quite a while, and it becomes even smoother using the HTTP whiteboard approach. But writing a servlet is, in most cases, not what you actually want. It is more like wiring method calls, service method calls, to URLs, finally rendering to HTML. Looking at the Spring WebMVC framework, this can be as easy as annotating a class with some @Controller annotation and returning a redirect to a JSP page.

Living in OSGi land, this sounds even better: dynamically registering and referencing controllers and services, configuring the application on the fly, during runtime. A dream come true.

Pretty soon it gets quite frustrating from there on. Equinox JSP is not too bad, but suffers from the Equinox HTTP service implementation, which has a few bugs and drawbacks. Pax Web is fine, but its whiteboard pattern, despite the same name, has nothing to do with the OSGi HTTP whiteboard. Most other tutorials around OSGi and HTTP focus on registering a servlet, since this is pretty much the standard specification right now. Everything around JSP is self-made for each framework and mostly works around issues in Apache Jasper. Jasper seems to be the only JSP implementation, but it is so deeply tied to JEE that it is really hard to use it in a different environment. So most tools simply wrap classloaders and tweak “getResource” methods in order to let Jasper think it is in a JEE environment.

Looking at what other JEE applications do, it really seems that everybody uses Jasper, in different patched versions. Tomcat of course, JBoss (aka Wildfly), Glassfish and Geronimo. Equinox JSP and Pax Web also have their own wrapped and patched Jasper versions.

Now when it comes to JSTL: sure, you want to have all the fuss when you develop JEEish applications. Pax Web really does consider looking up dependent bundles for tag libraries, whereas Equinox JSP only scans the “Bundle-ClassPath” jars. Apache Jasper however simply ignores the “core” JSTL tag library, although it might get detected on the class path.

Now the good point is: it’s OSGi, and with a little bit of effort you can throw different frameworks together into one pot. Taking Equinox as the OSGi framework, Pax Web for providing the Http Service, Equinox JSP for a non-intrusive JSP servlet and a little bit of custom code for the Spring MVC controller-like framework, Package Drone got a nice little web framework. The JSTL tags are provided by JBoss JSTL, which features an OSGi version of the tags.

While the simple servlets are plain Pax Web registrations, including the Equinox JSP servlet, the Spring MVC like setup is a custom part of Package Drone, but built with some reusability in mind. A main dispatcher servlet picks up all services which are registered with a @Controller annotation. Calls are simply routed to service methods. The result is a reference to a JSP page, which now actually is part of the controller bundle and not the servlet. The dispatcher mechanism takes care of this: on the one side it alters the redirection to the JSP so that the bundle is part of the redirect path, and on the other side it registers all relevant JSP resources of a bundle with the JSP servlet of Equinox JSP.
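Just to illustrate the concept, here is a rough, self-contained sketch of what such a controller could look like; the annotation names mimic Spring MVC but are made up here and are not the actual Package Drone API:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// made-up marker annotations, for illustration only

@Retention ( RetentionPolicy.RUNTIME )
@Target ( ElementType.TYPE )
@interface Controller {}

@Retention ( RetentionPolicy.RUNTIME )
@Target ( ElementType.METHOD )
@interface RequestMapping { String value (); }

@Controller
class ChannelController {
    @RequestMapping ( "/channels" )
    public String list () {
        // the dispatcher resolves this to a JSP resource inside the controller bundle
        return "WEB-INF/views/channels.jsp";
    }
}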

It took quite a while and cost some nerves … but it seems that the next version of Package Drone will have a web framework which is based on the OSGi HttpService, supports controller style services and still feels a bit like JEE 😉

My day with Google

I had the chance to apply for a job at Google and got invited to the on-site interview after passing the telephone interview. Now there are numerous blog posts out there about getting a job at Google, cracking the code interview, etc. If you are looking for that, this blog post won’t help you. First of all, I didn’t get an offer, and second, it is more about reflecting on the on-site day. To be honest, I would never actually consider writing a blog post about such a topic. However, telling family and friends about this caused a bit of a wow-hype, simply because it was Google. So it’s time to make an exception.

If you want to get a glimpse of what it will be like (or rather, how it was for me), read on … maybe this helps you to prepare yourself. And if you start reading … please read it completely!

So what was it like?

I had the on-site interview at the Google office in Munich. The office itself is quite impressive and the tour during lunch time was really fascinating. It is a bit like working in Disney Land. Most people are really nice and give you a great time. So whether or not you get the job, you will still have a really exciting and fascinating day.

What about the interviews?

Well, that is a totally different story. As already said, there are enough web sites out there to get you prepared for the code interview. And nobody asked those strange Google questions; everything was focused on code interviews. More precisely, on solving algorithmic puzzles. And that’s it! Nothing about the past, the present or the future of your career.

Interestingly, all the people there were of the same type. A friend of mine, knowing a few people working at Google, warned me with something like “these are the type of nerds you don’t want to work with”. I did not understand what he meant. And I am pretty sure there are lots of people who don’t want to work with me either 😉 So I just ignored this, which was a good thing, but understood what he meant afterwards. I still can’t put it into words, and from my perspective I would not warn anyone! But it is true, there really seems to be only one type of personality/nerd/geek there. Maybe it was just a coincidence, but diversity looks different.

And then… ?

After I came out of the last interview and left, I was pretty sure that I wouldn’t get an offer. And I was pretty unsure about the idea of working at Google. Wait, why? Everything seemed really cool, it was a great day … how can that be?

Of course I did ask a few questions myself, mostly the same ones to each person, in order to compare the answers for myself. I also found it quite strange that, besides one or two questions asked outside the interview process, nobody asked questions. Having worked over 15 years in building complex software systems, I would have expected at least a single question about the facts I listed in my résumé, beside one question in the telephone interview from a curious guy who had some experience in the same field. All the answers to the questions I asked were pretty much the same (and sorry, but I won’t post the questions or the answers). This made me think about all of this from a different perspective.

Why, oh why?

I asked myself the question (again): why do I want to work for Google? I came up with two simple answers: a) It is a cool working place! – Yes, I saw that! It really is! b) You can create great things there! – And this is what I really doubt after spending a day at Google and talking to the people there.

The second question is: why do people actually leave Google? If it is such a cool place, why do some people leave it? (Of course for some it might not be their choice, but for others it is.) And I got this answered as well, though not during the interview day at Google, of course.

Innovation?

It all comes down to innovation. Maybe the term is a bit vague, but in the end, what I mean is … creating something new. Having great ideas, turning them into great products. And this is what my idea of a job at Google was.

The reality is different … based on the answers to my questions, on how their interview process works, and on what I have seen, experienced and heard from others … there is actually nothing innovative there. Everything is about slightly improving what is already there. And not everybody likes that. Which is a reason why people leave Google.

Proof?

Of course this is all just my talk after not getting an offer … so do me a favor and read the next section as well! And of course I will not quote people’s answers or name who said what!

So just look at the bigger picture. What are the most successful Google products? And I am not talking about the financially successful products, just the most used, the most recognized … the first products which pop into your mind when you hear Google. For me it is: the search engine, Android, YouTube, advertising (AdSense, AdWords, Analytics).

Sure, the search engine is their thing … and then:

Data from: Google Acquisitions at Wikipedia

  • Android: August 17, 2005
  • YouTube: October 9, 2006
  • DoubleClick: April 13, 2007

The initial invention, idea, vision, innovation … whatever you name it, was not born at Google; it was acquired. Everything after that is only a bit of improvement, but nothing new. Just check the full list at Wikipedia.

Interestingly, Heise comes to the same conclusion after the Google I/O 2015 (see article, in German). The summary is: Google only showed “expected” advances, but nothing visionary, exciting … truly innovative, just minor improvements.

So what?

So what am I saying? Don’t work for Google? No, absolutely not! I do not want to say that at all!

But you should re-think it from a different perspective. If you want to do “great things”, maybe it is better to go somewhere else and get acquired by Google 😉 Working for Google seems like a hype. Like the hype for the Apple Watch, or for Maker Bots … it is really cool. Until you have a closer look and are honest with yourself.

Did this change my view on what Google is doing? Not at all. I still like many products and think others could be improved a lot.

Now wait – wouldn’t this post look totally different if you got an offer?!

Sure it would! I guess this blog post would not even exist. Instead of writing about it, I would put the effort into changing things for the better. Writing a blog post would not achieve that.

Package Drone – what’s next?!

Every now and then there is some time for Package Drone. So let’s peek ahead at what will happen in the next few weeks.

First of all, there is the Eclipse DemoCamp in Munich, at which Package Drone will be presented. So if you want to talk in person, come over and pay us a visit.

I have also been working on version 0.8.0. The more you think about it, the more ideas you get about what could be improved. If only I had the time. But finally it is time for validation! Channels and artifacts can be validated, and the outcome will be presented in red and yellow, and in a lot more detail ;-). This is a first step towards more things we hope to achieve with validation, like rejecting content and providing resolution mechanisms. Quick-fix your artifacts 😉

There are also a few enhancements to make it easier for new users to start with Package Drone. “Channel recipes”, for example, set up and configure a channel for a specific purpose, just to name one.

Of course this is important since, with a little bit of luck, there will be an article in the upcoming German “Eclipse Magazin”, which might bring some new users. Helping them to have an easy start is always a good idea 😉

The next version also brings a new way to upload artifacts: a plain and simple HTTP request will do. While I would not call it an “API”, it definitely is the starting point of exactly that. Planned is a command line client, and already available is the Jenkins plugin for Package Drone, which allows archiving artifacts directly to a Package Drone channel, including adding some meta data of the build.
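Just to sketch the idea of such an upload (the URL layout and names here are made up for illustration, not the final API):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class UploadExample {
    public static void main ( final String[] args ) throws Exception {
        // hypothetical endpoint - the real URL layout may differ
        final URL url = new URL ( "http://localhost:8080/api/upload/my-channel/my-artifact.jar" );
        final Path file = Paths.get ( "my-artifact.jar" );

        final HttpURLConnection connection = (HttpURLConnection)url.openConnection ();
        connection.setRequestMethod ( "PUT" );
        connection.setDoOutput ( true );

        try ( OutputStream out = connection.getOutputStream () ) {
            Files.copy ( file, out ); // stream the artifact to the server
        }

        System.out.println ( "Result: " + connection.getResponseCode () );
    }
}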

So, if you have more ideas, please raise an issue in the GitHub issue tracker.

Meanwhile @ Package Drone

Since Package Drone has its own home now, I would simply like to sum up here what progress Package Drone has made in the last few weeks.

First of all, the most recent release, as of now, is 0.4.0. The last two releases were mostly focused on the processing of zipped P2 repositories and what comes with that. These can be processed in two different ways now. Either using the Unzip adapter, which is more like a way of deep linking, but still allows one to access a P2 repository inside a ZIP artifact. The second way is the P2 repository unzipper aspect, which unzips bundles and features and creates virtual child artifacts. The second approach makes these artifacts available to all other Package Drone functionality, but also modifies the original content by unzipping and creating new meta data. However, both variants can be used at the same time!

There is also a setup for OpenShift, and a quickstart at the OpenShift Hub. So if you want to try out Package Drone, the simplest way is to create a free account at OpenShift and simply deploy a new Package Drone setup with a few clicks. Including the database setup.

Of course lots of things have been cleaned up and improved in the UI and the backend system, but this is more a topic for the actual release notes at GitHub.

So the question is what the future will bring. One thing I would like to see is Postgres again as a database. With the most recent Postgres JDBC driver and some help from my colleague, this might be a feature appearing in one of the next versions. MySQL works fine, but has very bad behavior when it comes to BLOB support. And since all artifacts are stored in the database, this can cause some huge memory requirements. Hopefully Postgres does a better job here.

Of course there is also the idea of storing the artifacts separately in the file system. While this requires a little bit of extra processing when it comes to backing up your system, it might be the right time to add a full backup and restore process to Package Drone. This would also solve the problem of how to switch between storage backends.

And of course, if you would like to help out, please report bugs and become a contributor 😉