Wednesday, February 24, 2016

Extension/Service/Plugin mechanisms in Java

Since I started to dive deep into OSGi, I have been wondering more and more how frameworks that offer some kind of extension mechanism, e.g. Apache Camel, where you can define your own endpoint, or the Eclipse IDE with its plugins, handle finding and instantiating extensions. I remember very well a presentation from JAX 2013 by Kai Tödter, in which he showed the combination of Vaadin and OSGi. While the web app was running he could add and remove menu entries just by starting and stopping bundles.
For a while now I have been looking at several approaches to creating an extensible application, and you can find resources for every single method. I want to give a medium-sized (not short ;)) overview here of the different ways I know to make a Java application extensible. For each method I will also add a list of advantages and disadvantages, from my point of view, and try to give a simple example.
To avoid confusion: when I write about the advantages and disadvantages, I write from the point of view of someone who wants to provide this extension mechanism in a framework, not from the API consumer's point of view.

Passing the object

This is the most obvious method. The framework defines a method which takes the SPI interface and you simply pass the object. Camel, among other methods, makes use of this (example taken from the Camel FAQ):
CamelContext context = new DefaultCamelContext();
context.addComponent("foo", new FooComponent(context));
Internally, Camel doesn't do much magic (code taken from Camel on GitHub).
public void addComponent(String componentName, final Component component) {
    ObjectHelper.notNull(component, "component");
    synchronized (components) {
        if (components.containsKey(componentName)) {
            throw new IllegalArgumentException("Cannot add component as its already previously added: " + componentName);
        }
        components.put(componentName, component);
        for (LifecycleStrategy strategy : lifecycleStrategies) {
            strategy.onComponentAdd(componentName, component);
        }

        // keep reference to properties component up to date
        if (component instanceof PropertiesComponent && "properties".equals(componentName)) {
            propertiesComponent = (PropertiesComponent) component;
        }
    }
}
Every component has to have a unique name and is somehow bound to a lifecycle. Removing a component is also possible, but has to be triggered from user code.
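The bookkeeping a framework has to do for this approach can be sketched in a few lines. The following registry is a minimal, hypothetical example; the Component interface and its start/stop lifecycle are made up and only loosely mirror what Camel's addComponent() does:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ComponentRegistry {

    // Hypothetical SPI with a minimal lifecycle.
    public interface Component {
        void start();
        void stop();
    }

    private final Map<String, Component> components = new ConcurrentHashMap<String, Component>();

    public void addComponent(String name, Component component) {
        // Reject duplicate names, just like Camel does.
        if (components.putIfAbsent(name, component) != null) {
            throw new IllegalArgumentException("Component already added: " + name);
        }
        // The framework owns the lifecycle...
        component.start();
    }

    public void removeComponent(String name) {
        Component removed = components.remove(name);
        if (removed != null) {
            // ...including shutdown, which user code has to trigger.
            removed.stop();
        }
    }

    public boolean contains(String name) {
        return components.containsKey(name);
    }
}
```

The interesting part is everything the registry does not show: every place that cached a component reference has to be informed about a removal, which is exactly what makes runtime changes complicated.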


Advantages:

  • Easy and straightforward
  • No need for an additional framework
  • Compiler checks for the correct interface


Disadvantages:

  • Access to a central class (the plugin/service/component holder) is necessary
  • Allowing changes during runtime is possible but complicated, since it has to be ensured that the component is removed everywhere
  • Your framework has to take care of the whole component lifecycle and any additional requirements it enforces

Interface and Reflection

This method is used quite often (basically it is also how the ServiceLoader works, see next section) and you can find it with small variations. The differences lie in where and how exactly the interface and implementation names reach the application. Placing them somewhere inside a properties file or passing them to the framework during startup are the most common approaches. The implementation is then instantiated using reflection. Creating a context with an InitialContextFactory, for example, works like this:
  Properties env = new Properties();
  // the factory class name is just an example
  env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
  Context ctx = new InitialContext(env);
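The general pattern can be sketched in plain Java: the interface is fixed at compile time, while the implementation class name arrives as a string. The Plugin interface, the property key plugin.class and the EchoPlugin class are made up for this example:

```java
import java.util.Properties;

public class ReflectionLoading {

    // Hypothetical SPI the framework defines.
    public interface Plugin {
        String name();
    }

    // Stands in for an implementation shipped by a user.
    public static class EchoPlugin implements Plugin {
        public String name() {
            return "echo";
        }
    }

    // Reads the implementation's class name from the configuration and
    // instantiates it via reflection -- no compile-time dependency needed.
    public static Plugin load(Properties config) throws Exception {
        String className = config.getProperty("plugin.class");
        Class<?> clazz = Class.forName(className);
        return (Plugin) clazz.getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        Properties config = new Properties();
        // In practice this would come from a properties file on the classpath.
        config.setProperty("plugin.class", EchoPlugin.class.getName());
        System.out.println(load(config).name());
    }
}
```

A wrong class name only blows up at runtime, inside load(), which is exactly the missing type safety this method suffers from.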


Advantages:

  • Easy and straightforward
  • No need for an additional framework
  • No need to provide a central class (in the properties file approach)


Disadvantages:

  • No type safety (if text based)
  • Your framework has to take care of the whole lifecycle and any additional requirements it enforces
  • Check for correct wiring only during runtime (if text based, the check happens either at startup or when the code is called, and the former is better than the latter)


ServiceLoader

Frameworks using the java.util.ServiceLoader can also be found quite often. During runtime the ServiceLoader uses a ClassLoader and checks the META-INF/services directory for a text file whose name equals the passed interface (SPI) name, reads the class name inside that file, and then instantiates the class via reflection. All the magic happens in the LazyIterator inside the ServiceLoader class (see OpenJDK). Basically, it's just reading a file and instantiating the object. Camel and HiveMQ, for example, use this method.
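To illustrate, a minimal sketch; the GreetingComponent interface is made up, and a provider JAR would additionally ship a file under META-INF/services named after the interface's fully qualified name, containing the implementation's class name:

```java
import java.util.ServiceLoader;

public class ServiceLoaderDemo {

    // Hypothetical SPI; providers implement it and list their class name in
    // META-INF/services/<fully qualified name of this interface>.
    public interface GreetingComponent {
        String greet();
    }

    public static int countProviders() {
        int found = 0;
        // ServiceLoader scans the classpath for provider files and lazily
        // instantiates each listed class via its no-arg constructor.
        for (GreetingComponent component : ServiceLoader.load(GreetingComponent.class)) {
            found++;
            System.out.println(component.greet());
        }
        return found;
    }

    public static void main(String[] args) {
        // Without a provider file on the classpath nothing is found.
        System.out.println("providers found: " + countProviders());
    }
}
```

Note the no-arg constructor requirement: the ServiceLoader has no other way to instantiate the class, which is one of the disadvantages listed below.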


Advantages:

  • Easy and straightforward
  • ServiceLoader is part of the JDK
  • No need for an additional framework


Disadvantages:

  • No lifecycle
  • The class has to provide a standard constructor
  • Support for runtime changes must be implemented (as mentioned here)
  • Check for correct wiring only during runtime (the filename or the string inside the file could be wrong)

(Eclipse) Extension Points

As far as I know, the concept of extension points never got popular outside Eclipse, although it is possible to include them in any application. To achieve loose coupling, the definition of the places where you can add your plugin, as well as the plugins themselves, is extracted into XML files.
To define an extension point you need something like this in your plugin.xml (the id, name and schema are made up for this example):
<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.4"?>
<plugin>
   <extension-point id="com.example.greeter" name="Greeter" schema="schema/greeter.exsd"/>
</plugin>
The extension provider then has to define an appropriate extension for that point in its own plugin.xml (again with made-up names):
<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.4"?>
<plugin>
   <extension point="com.example.greeter">
      <greeter class="com.example.internal.EnglishGreeter"/>
   </extension>
</plugin>
I have to admit that I am not completely sure how exactly you can integrate extension points outside Eclipse, but I guess you will need quite a lot of the basic Eclipse runtime. There is a blog post which explains how you can use extension points without depending on OSGi.


Advantages:

  • Extensions can be added during runtime
  • Good tool support inside Eclipse
  • Wrong wiring only affects a single extension
  • Loose coupling (more or less, since the extensions depend on the extension point id)


Disadvantages:

  • Dependencies on Eclipse
  • Overhead from the Eclipse platform (I cannot actually prove this point, but I assume there must be considerable overhead compared to the previous methods)
  • Check for correct wiring only during runtime

Spring XML

The Spring framework tried to find a way for loosely coupled components long before CDI, as we know it today, appeared. Their solution was an XML file in which the different classes are wired together. (I am well aware that nowadays there are other ways, too, but since they are based on annotations they don't differ enough from CDI to warrant their own paragraph.) In the basic XML file you define all your beans and Spring takes care of the instantiation. It is also possible to distribute the configuration among several XML files. A very simple example (taken and modified from the Spring documentation; class names shortened) looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="accountDao" class="x.y.jpa.JpaAccountDao"/>

    <bean id="petStore" class="x.y.services.PetStoreServiceImpl">
        <property name="accountDao" ref="accountDao"/>
    </bean>
</beans>
If you want to offer your users a way to add their services/plugins to the framework, you'll have to provide a setter method through which the users can add their objects, e.g. like this (taken from the camunda documentation):
<bean id="processEngineConfiguration" class="org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration">
  <property name="processEnginePlugins">
    <list>
      <bean id="spinPlugin" class="org.camunda.spin.plugin.impl.SpinProcessEnginePlugin" />
    </list>
  </property>
</bean>


Advantages:

  • Spring is lightweight
  • Lifecycle support from Spring


Disadvantages:

  • The XML needs to be maintained
  • No auto-detection: users have to write the XML when they want to add something
  • The Spring IoC container is needed
  • Correct wiring is only checked at startup

OSGi Services

OSGi was created embracing runtime changes, with bundles dynamically providing and removing their services. With this in mind, OSGi strongly supports applications being extended by services provided by different bundles. The simplest approach is to implement a ServiceListener or a ServiceTracker. Both should be created on bundle start and will react when a new implementation of the service appears. A ServiceListener can be as simple as this (taken from the Knopflerfish tutorial):
ServiceListener sl = new ServiceListener() {
  public void serviceChanged(ServiceEvent ev) {
    ServiceReference sr = ev.getServiceReference();
    switch (ev.getType()) {
      case ServiceEvent.REGISTERED:
        HttpService http = (HttpService) bc.getService(sr);
        // ...use or store the service...
        break;
      case ServiceEvent.UNREGISTERING:
        bc.ungetService(sr);
        break;
    }
  }
};

String filter = "(objectclass=" + HttpService.class.getName() + ")";
bc.addServiceListener(sl, filter);
Where bc is a BundleContext object. A ServiceTracker can be used like this:
ServiceTracker<HttpService, HttpService> serviceTracker = new ServiceTracker<HttpService, HttpService>(bc, HttpService.class, null);
serviceTracker.open();
There are more elegant ways to get hold of an OSGi service, using Blueprint, Declarative Services or the Apache Felix Dependency Manager, but the ServiceListener is the most basic way.


Advantages:

  • OSGi lifecycle support
  • Changes during runtime are "encouraged" ;)
  • Compiler checks the wiring (not for the ServiceListener, but for the rest)
  • Problems with services are restricted to a single bundle


Disadvantages:

  • You have to buy the whole OSGi package: imports, exports, bundles and everything
  • The full OSGi lifecycle makes the world more complicated, since every service can disappear at any moment

Note about PojoSR/OSGi Light

Since the biggest disadvantage of OSGi is that you have to take the whole package, I want to mention another approach here, called PojoSR or OSGi Light. Its goal is to give you the OSGi service concept without the rest that comes with OSGi. Unfortunately, I could not find much documentation about it, and the activity around the project seems to be very low at the moment. There is an article here and the PojoSR framework itself. It also looks like PojoSR is now a part of Apache Felix called "Connect", but its version is 0.1.0. So if any of you knows more about it, please let me know.


CDI

Contexts and Dependency Injection was a big step for Java EE, allowing developers to write more loosely coupled code. The CDI container takes care of automagically wiring the different parts together; the developer only has to use the correct annotations. Depending on which CDI beans are present at runtime, concrete implementations can be changed without changing the code that uses them. The basic injection of a class looks like this:
@Inject
private MyServiceInterface service;
If there is a need to get hold of all implementations (which is what we actually want here), the class Instance must be used:
@Inject @Any
private Instance<MyServiceInterface> services;
Since Instance is an Iterable, a simple for-each loop can be used to access all the objects. Alternatively, the select() method can be used to specify further requirements.


Advantages:

  • Compiler checks for the correct type
  • The CDI container checks correct wiring at startup
  • Part of the Java EE standard but can also be used without an application server (use a JSR-330 implementation like Guice or HK2)
  • CDI lifecycle support


Disadvantages:

  • A CDI container is needed
  • Changes during runtime are not possible
  • Annotatiomania (at least if you don't watch out)


As you can see, many different frameworks/methods have evolved in the Java ecosystem, every single one with its specific advantages and disadvantages. I think we can summarize the different extension mechanisms as three types (with their members):
  1. String and well-known location ("Interface and Reflection", "ServiceLoader", "(Eclipse) Extension Points", "Spring XML")
  2. Programmatic wiring ("Passing the object", "Interface and Reflection", "OSGi Services")
  3. Classpath scanning ("CDI")
Of course the three types are not exclusive. You may provide your users more than one way and let them choose. Also, CDI is not the only framework that uses classpath scanning; Spring, with its two other ways of configuring the IoC container, relies on that method, too.

I hope this article provides a good and sufficient overview of the different methods for creating an extensible framework. Choosing the right one will surely make your users happy. If you know another method which I forgot, please let me know and I will gladly add it here.

Please note that the lists of advantages and disadvantages are based on my reasoning. I tried to be objective but like every programmer I have my favorites and my experiences with the frameworks that may make me a little bit biased.

Saturday, February 20, 2016

What can capabilities do for your processes?

Before we release camunda BPM OSGi 2.0 I want to do a little more advertisement for it and show what is possible with the new version. One change in the new version is that it depends on OSGi 4.3 and no longer on 4.2. Besides the fact that I can now use generics in the code (yay!), this means that the capability headers will work. So, what's so impressive about them?

Capability headers

The capability headers are the two headers Provide-Capability and Require-Capability. They are a further abstraction of the Import-Package and Export-Package headers we all (should ;)) know. But with the capability headers you are not as limited as with the package headers; arbitrary things can be defined, e.g.
Provide-Capability: sensor; type=gyro
would be a valid statement. But you are not limited to one attribute:
Provide-Capability: sensor; type=heat; minTemp=0; maxTemp=100
is also possible. And the bundle that requires such capabilities can use an LDAP filter expression:
Require-Capability: sensor; filter:="(&(type=heat)(minTemp=0)(maxTemp=100))"
That way it is possible to find exactly what is needed, in a way that allows specifying more than just packages and versions.
How can you use this for your business processes?

Capability headers for processes

One use case that quickly came to my mind was process definitions that depend on each other, e.g. a process with a call activity. An example could look like this (please excuse that I didn't prepare an exhaustive example):
Let's call this one the "Hunger process". The callee process, the "Phone process", can be as simple as this:

The last time I checked, there was nothing to stop you from trying to start the Hunger process even though the Phone process hasn't been deployed yet. If the Hunger process were something you want to start automatically, you would run into a nasty exception. Here the headers can help: you could simply declare in your MANIFEST that you require the Phone process before your bundle can be started:
Require-Capability: process; filter:="(key=Phone_process)"
You could also add a version number or whatever seems useful. The bundle containing the Phone process then of course has to contain the corresponding part:
Provide-Capability: process; key=Phone_process
So when you deploy the bundle with the Hunger process, it cannot be started without the bundle containing the Phone process. That way you can manage your process interdependencies without running into exceptions.
Finally, if you use the maven-bundle-plugin, I want to give you a short example.
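Written out as raw manifest headers, the two bundles could contain something like this (the Bundle-SymbolicName values are made up):

```
Bundle-SymbolicName: com.example.phone.process
Provide-Capability: process; key=Phone_process

Bundle-SymbolicName: com.example.hunger.process
Require-Capability: process; filter:="(key=Phone_process)"
```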

Setting the headers with the maven-bundle-plugin

With the maven-bundle-plugin it is really easy to set the headers. I'll suppose that you use <packaging>bundle</packaging> in your POM. Here's how you can set the headers inside the plugin's <instructions> element:
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <Provide-Capability>process; key=Phone_process</Provide-Capability>
    </instructions>
  </configuration>
</plugin>
See, piece of cake ;)

I hope I could give you some idea how you could use the capability headers that OSGi 4.3 introduced. This was just a quick example but I think it shows nicely, how OSGi can support your BPMN processes.

Saturday, February 13, 2016

camunda BPM OSGi - Event Bridge

I implemented the eventing feature some months ago already, but until now I haven't managed to advertise it a little more. So, let's praise my work ;)

I'll start with some background information, which you can skip if you're familiar with camunda BPM and the OSGi EventAdmin. Then, some information about the what and how follows.

Let's start with OSGi eventing.

OSGi Event Admin

The Event Admin is part of the OSGi Compendium Specification. It is a way to communicate between bundles in a decoupled fashion by sending events. The communication follows a publish/subscribe scheme.

One bundle obtains the EventAdmin service, creates an Event object and sends it. Every event is created with a certain topic and can contain arbitrary String properties as key-value pairs. Topics are hierarchical, separated by a "/", and wildcards are allowed. E.g. org/osgi/framework/BundleEvent/STARTED is a topic used by the OSGi framework.

Events can be sent in a synchronous or asynchronous way and additional LDAP filters can be used based on the properties.

You can find a good example on the Apache Felix website.

Now that we know a little bit about the EventAdmin let's take a look at camunda BPM.

camunda BPM events

During the execution of a process certain events occur, e.g. a task is assigned or a process ends. To be able to "see" those events the user has to register either an ExecutionListener or a TaskListener (for more details see here and here).

The common way to register the listeners is to add them directly to the process definition, i.e. the .bpmn file. But there are certainly cases where we do not own the process file but would still like to receive events (e.g. for monitoring).

Let's see how to achieve this in an OSGi environment.

camunda BPM OSGi - Event Bridge

I have to admit the idea of an event bridge is not my own; the CDI extension for camunda BPM already has a CDI event bridge. For OSGi, however, this feature was missing. I'll explain what happens internally and how you can use it.

What happens?

The OSGi event bridge implementation exports a service that is a BpmnParseListener. Whenever the engine parses a process definition this listener becomes active and attaches a TaskListener and an ExecutionListener wherever possible. But these listeners aren't full implementations; they are dynamic proxies with a special InvocationHandler.

When the InvocationHandler is invoked, it checks whether the OSGi event bridge is still active and whether the EventAdmin is present. If so, it instantiates a new OSGiEventDistributor, which creates a new event and fills in the properties.

I've tried to use all properties the camunda events provide and put them into the event properties. You can see a full list in this class.

This is basically what is happening. So, what can you do with the event bridge?

How to use it?

Before you can make use of the OSGi event bridge you have to add the OSGiEventBridgeActivator as a BpmnParseListener to your ProcessEngineConfiguration. You do this with the method setCustomPreBPMNParseListeners(). Unfortunately, there is no way to add the listener to an already created engine. After adding the listener, events are published. The event topics are:
  • org/camunda/bpm/extension/osgi/eventing/TaskEvent
  • org/camunda/bpm/extension/osgi/eventing/Execution
Of course you can use an asterisk after ../eventing/ to match both.

Wherever you want to listen to events, you can create your own EventHandler and subscribe to the topic you need/want. A simple example would be:

EventHandler eventHandler = new EventHandler() {
  public void handleEvent(Event event) {
    Logger.getLogger("example").info("Event occurred: " + event.getTopic());
  }
};
Dictionary<String, String> props = new Hashtable<String, String>();
props.put(EventConstants.EVENT_TOPIC, Topics.ALL_EVENTING_EVENTS_TOPIC);
bundleContext.registerService(EventHandler.class.getName(), eventHandler, props);

Since a lot of information is contained in the event properties, you can also use a more sophisticated LDAP filter expression based on it. E.g. if you only want to receive events for a certain process you can do this:

EventHandler eventHandler = new EventHandler() {
  public void handleEvent(Event event) {
    // only events of the "invoice" process definition arrive here
  }
};
Dictionary<String, String> props = new Hashtable<String, String>();
props.put(EventConstants.EVENT_TOPIC, Topics.ALL_EVENTING_EVENTS_TOPIC);
props.put(EventConstants.EVENT_FILTER, "(processDefinitionId=invoice)");
bundleContext.registerService(EventHandler.class.getName(), eventHandler, props);

And that's it. At the moment there is no way to limit the applications that are allowed to receive events, so everybody who subscribes can see all the events. If you have an idea how to do this in a nice way, please let me know.

I hope you can make good use of the OSGi event bridge. My plan is to release camunda BPM OSGi 2.0.0 (which includes the event bridge) shortly after camunda BPM 7.5.0 is released.


Copyright @ 2013 Wrong tracks of a developer.
